
Test Driven Design?

Started by Tim Wescott May 16, 2017
On 18/05/17 15:22, Tim Wescott wrote:
> On Thu, 18 May 2017 14:48:12 +0100, Theo Markettos wrote:
>
>> Tim Wescott <tim@seemywebsite.really> wrote:
>>> So, you have two separate implementations of the system -- how do you
>>> know that they aren't both identically buggy?
>>
>> Is that the problem with any testing framework?
>> Quis custodiet ipsos custodes?
>> Who tests the tests?
>>
>>> Or is it that one is carefully constructed to be clear and easy to
>>> understand (and therefore review) while the other is constructed to
>>> optimize over whatever constraints you want (size, speed, etc.)?
>>
>> Essentially that. You can write a functionally correct but slow
>> implementation (completely unpipelined, for instance). You can write an
>> implementation that relies on things that aren't available in hardware
>> (a+b*c is easy for the simulator to check, but the hardware
>> implementation in IEEE floating point is somewhat more complex). You
>> can also write high level checks that don't know about implementation
>> (if I enqueue E times and dequeue D times to this FIFO, the current fill
>> should always be E-D).
>>
>> It helps if they're written by different people - eg we have 3
>> implementations of the ISA (hardware, emulator, formal model, plus the
>> spec and the test suite) that are used to shake out ambiguities: specify
>> first, write tests, three people implement without having seen the
>> tests, see if they differ. Fix the problems, write tests to cover the
>> corner cases. Rinse and repeat.
>>
>> Theo
>
> It's a bit different on the software side -- there's a lot more of "poke
> it THIS way, see if it squeaks THAT way". Possibly the biggest value is
> that (in software at least, but I suspect in hardware) it encourages you
> to keep any stateful information simple, just to make the tests simple --
> and pure functions are, of course, the easiest.
>
> I need to think about how this applies to my baby-steps project I'm
> working on, if at all.
Interesting questions with FSMs implemented in software...

Which of the many implementation patterns should you choose? My preference is anything that avoids deeply nested if/then/else/switch statements, since they rapidly become a maintenance nightmare. (I've seen nesting 10 deep!)

Also, design patterns that enable logging of events and states should be encouraged and left in the code at runtime. I've found them /excellent/ techniques for correctly deflecting blame onto the other party :)

Should you design in a proper FSM style/language and autogenerate the executable source code, or code directly in the source language? Difficult, but there are very useful OOP design patterns that make it easy.

And w.r.t. TDD, should your tests demonstrate that the FSM's design is correct or that the implementation artefacts are correct? Naive unit tests often end up testing the individual low-level implementation artefacts, not the design. Those are useful when refactoring, but otherwise are not sufficient.
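One pattern that avoids the nesting described here is a transition table keyed by (state, event), with every transition logged. A minimal Python sketch; the states, events, and names are invented for the example, not taken from the thread:

    import logging

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("fsm")

    # Transition table: (state, event) -> next state.
    # Adding a state or event is a table edit, not another nesting level.
    TRANSITIONS = {
        ("idle", "start"): "running",
        ("running", "pause"): "paused",
        ("paused", "start"): "running",
        ("running", "stop"): "idle",
        ("paused", "stop"): "idle",
    }

    class Machine:
        def __init__(self, state="idle"):
            self.state = state

        def handle(self, event):
            nxt = TRANSITIONS.get((self.state, event))
            if nxt is None:
                # Logged rejections are the blame-deflection audit trail.
                log.warning("ignored event %r in state %r", event, self.state)
                return
            log.info("%r --%r--> %r", self.state, event, nxt)
            self.state = nxt

    m = Machine()
    for ev in ["start", "pause", "start", "bogus", "stop"]:
        m.handle(ev)

Because the table is data, it can also be generated from a higher-level FSM description, which is one answer to the autogenerate-or-hand-code question above.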
On 5/18/2017 12:14 PM, Tom Gardner wrote:
> On 18/05/17 15:22, Tim Wescott wrote:
>> On Thu, 18 May 2017 14:48:12 +0100, Theo Markettos wrote:
>> [snip]
>>
>> It's a bit different on the software side -- there's a lot more of "poke
>> it THIS way, see if it squeaks THAT way". Possibly the biggest value is
>> that (in software at least, but I suspect in hardware) it encourages you
>> to keep any stateful information simple, just to make the tests simple --
>> and pure functions are, of course, the easiest.
>>
>> I need to think about how this applies to my baby-steps project I'm
>> working on, if at all.
>
> Interesting questions with FSMs implemented in software...
>
> Which of the many implementation patterns should
> you choose?
Personally, I custom design FSM code without worrying about what it would be called. There really are only two issues. The first is whether you can afford a clock delay in the output and how that impacts your output assignments. The second is the complexity of the code (maintenance).
> My preference is anything that avoids deeply nested
> if/then/else/switch statements, since they rapidly
> become a maintenance nightmare. (I've seen nesting
> 10 deep!)
Such deep layering likely indicates a poor problem decomposition, but it is hard to say without looking at the code. Normally there is a switch for the state variable and conditionals within each case to evaluate inputs. Typically this is not so complex.
> Also, design patterns that enable logging of events
> and states should be encouraged and left in the code
> at runtime. I've found them /excellent/ techniques for
> correctly deflecting blame onto the other party :)
>
> Should you design in a proper FSM style/language
> and autogenerate the executable source code, or code
> directly in the source language? Difficult, but there
> are very useful OOP design patterns that make it easy.
Designing in anything other than the HDL you are using increases the complexity of backing up your tools. In addition to source code, it can be important to be able to restore the development environment. I don't bother with FSM tools other than tools that help me think.
> And w.r.t. TDD, should your tests demonstrate the
> FSM's design is correct or that the implementation
> artefacts are correct?
I'll have to say that is a new term to me, "implementation artefacts[sic]". Can you explain? I test behavior. Behavior is what is specified for a design, so why would you test anything else?
> Naive unit tests often end up testing the individual
> low-level implementation artefacts, not the design.
> Those are useful when refactoring, but otherwise
> are not sufficient.
-- Rick C
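The clock-delay trade-off Rick describes is easiest to see side by side: a registered (Moore-style) output is clean but trails the state change by a clock, while a combinational (Mealy-style) output has no delay but brings longer paths and possible glitches. A minimal sketch in MyHDL, which the thread links to later; the module, port names, and two-state machine are invented for illustration, and always_seq/always_comb assume MyHDL 0.9 or newer:

    from myhdl import (block, always_seq, always_comb, Signal,
                       ResetSignal, enum)

    t_state = enum('IDLE', 'BUSY')

    @block
    def fsm(clk, reset, start, done, busy_reg, busy_now):
        """clk: Signal(bool), reset: ResetSignal, the rest Signal(bool)."""
        state = Signal(t_state.IDLE)

        @always_seq(clk.posedge, reset=reset)
        def seq():
            if state == t_state.IDLE:
                if start:
                    state.next = t_state.BUSY
            elif state == t_state.BUSY:
                if done:
                    state.next = t_state.IDLE
            # Registered output: glitch-free, but it lags the
            # state change by one clock.
            busy_reg.next = state == t_state.BUSY

        @always_comb
        def comb():
            # Combinational output: no clock delay, at the price
            # of longer paths and possible glitches.
            busy_now.next = state == t_state.BUSY

        return seq, comb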
On 5/18/2017 12:08 PM, lasselangwadtchristensen@gmail.com wrote:
> On Thursday, 18 May 2017 at 15:48:19 UTC+2, Theo Markettos wrote:
>> Tim Wescott <tim@seemywebsite.really> wrote:
>>> So, you have two separate implementations of the system -- how do you
>>> know that they aren't both identically buggy?
>>
>> Is that the problem with any testing framework?
>> Quis custodiet ipsos custodes?
>> Who tests the tests?
>
> the test?
>
> if two different implementations agree, it adds a bit more confidence
> than an implementation agreeing with itself.
The point is if both designs were built with the same misunderstanding of the requirements, they could both be wrong. While not common, this is not unheard of. It could be caused by cultural biases (each company is a culture) or a poorly written specification.

-- Rick C
On Thu, 18 May 2017 13:05:40 -0400, rickman wrote:

> On 5/18/2017 12:08 PM, lasselangwadtchristensen@gmail.com wrote:
>> On Thursday, 18 May 2017 at 15:48:19 UTC+2, Theo Markettos wrote:
>> [snip]
>>
>> if two different implementations agree, it adds a bit more confidence
>> than an implementation agreeing with itself.
>
> The point is if both designs were built with the same misunderstanding
> of the requirements, they could both be wrong. While not common, this
> is not unheard of. It could be caused by cultural biases (each company
> is a culture) or a poorly written specification.
Yup. Although testing the real, obscure and complicated thing against the fake, easy to read and understand thing does sound like a viable test, too. Prolly should both hit the thing with known test vectors written against the spec, and do the behavioral vs. actual sim, too.

-- Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
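Both prongs fit in a few lines. A sketch in Python, with both models and the operation stream invented for illustration: one random stream of spec-derived operations drives an obviously-correct reference FIFO and a ring-buffer "implementation", comparing their outputs, while the spec-level invariant from earlier in the thread (fill = enqueues - dequeues) is checked independently of either:

    import random
    from collections import deque

    class RefFifo:
        """Obviously-correct reference model, written for clarity."""
        def __init__(self):
            self.q = deque()
        def enq(self, x): self.q.append(x)
        def deq(self): return self.q.popleft()
        def fill(self): return len(self.q)

    class RingFifo:
        """'Real' implementation under test: a fixed-size ring buffer."""
        def __init__(self, depth=16):
            self.buf = [None] * depth
            self.rd = self.wr = self.count = 0
        def enq(self, x):
            self.buf[self.wr] = x
            self.wr = (self.wr + 1) % len(self.buf)
            self.count += 1
        def deq(self):
            x = self.buf[self.rd]
            self.rd = (self.rd + 1) % len(self.buf)
            self.count -= 1
            return x
        def fill(self):
            return self.count

    ref, dut = RefFifo(), RingFifo()
    enqueues = dequeues = 0
    random.seed(1)
    for _ in range(10000):
        if ref.fill() and random.random() < 0.5:
            dequeues += 1
            assert dut.deq() == ref.deq()   # behavioral vs. actual
        elif dut.fill() < 16:
            enqueues += 1
            x = random.randrange(256)
            ref.enq(x); dut.enq(x)
        # Spec-level invariant, independent of either implementation.
        assert dut.fill() == enqueues - dequeues
    print("10k random ops: models agree, invariant holds")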
On Tue, 16 May 2017 15:21:49 -0500
Tim Wescott <tim@seemywebsite.really> wrote:

> Anyone doing any test driven design for FPGA work?
If you do hardware design with an interpretive language, then test driven design is essential:

http://docs.myhdl.org/en/stable/manual/unittest.html

My hobby project is long and slow, but I think this discipline is slowly improving my productivity.

Jan Coombs
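For flavour, a compressed sketch of what the linked manual chapter demonstrates: a MyHDL testbench wrapped in a standard unittest case, with a clock generator, the unit under test, and a checker running side by side. The counter, port names, and timing here are invented for the sketch, and bench().run_sim() assumes MyHDL 1.0's block API (older releases use Simulation(...).run() instead):

    import unittest
    from myhdl import (block, always, instance, Signal, intbv,
                       delay, StopSimulation)

    @block
    def counter(clk, count):
        """Unit under test: free-running 8-bit counter."""
        @always(clk.posedge)
        def logic():
            count.next = (count + 1) % 256
        return logic

    class TestCounter(unittest.TestCase):
        def test_counts_and_wraps(self):
            clk = Signal(bool(0))
            count = Signal(intbv(0)[8:])

            @block
            def bench():
                dut = counter(clk, count)

                @always(delay(5))
                def clkgen():
                    clk.next = not clk

                @instance
                def check():
                    expected = 0
                    for _ in range(300):   # enough cycles to wrap past 255
                        yield clk.posedge
                        expected = (expected + 1) % 256
                        yield delay(1)     # let the dut's update settle
                        self.assertEqual(int(count), expected)
                    raise StopSimulation()

                return dut, clkgen, check

            bench().run_sim()

    if __name__ == "__main__":
        unittest.main()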
On 18/05/17 18:01, rickman wrote:
> On 5/18/2017 12:14 PM, Tom Gardner wrote:
>> On 18/05/17 15:22, Tim Wescott wrote:
>>> [snip]
>>
>> Interesting questions with FSMs implemented in software...
>>
>> Which of the many implementation patterns should
>> you choose?
>
> Personally, I custom design FSM code without worrying about what it
> would be called. There really are only two issues. The first is
> whether you can afford a clock delay in the output and how that
> impacts your output assignments. The second is the complexity of the
> code (maintenance).
>
>> My preference is anything that avoids deeply nested
>> if/then/else/switch statements, since they rapidly
>> become a maintenance nightmare. (I've seen nesting
>> 10 deep!)
>
> Such deep layering likely indicates a poor problem decomposition, but
> it is hard to say without looking at the code.
It was a combination of technical and personnel factors. The overriding business imperative was, at each stage, to make the smallest and /incrementally/ cheapest modification. The road to hell is paved with good intentions.
> Normally there is a switch for the state variable and conditionals
> within each case to evaluate inputs. Typically this is not so complex.
This was an inherently complex task that was ineptly implemented. I'm not going to define how ineptly, because you wouldn't believe it. I only believe it because I saw it, and boggled.
>> Also, design patterns that enable logging of events
>> and states should be encouraged and left in the code
>> at runtime. I've found them /excellent/ techniques for
>> correctly deflecting blame onto the other party :)
>>
>> Should you design in a proper FSM style/language
>> and autogenerate the executable source code, or code
>> directly in the source language? Difficult, but there
>> are very useful OOP design patterns that make it easy.
>
> Designing in anything other than the HDL you are using increases the
> complexity of backing up your tools. In addition to source code, it
> can be important to be able to restore the development environment. I
> don't bother with FSM tools other than tools that help me think.
Very true. I use that argument, and more, to caution people against inventing Domain Specific Languages when they should be inventing Domain Specific Libraries. Guess which happened in the case I alluded to above.
>> And w.r.t. TDD, should your tests demonstrate the
>> FSM's design is correct or that the implementation
>> artefacts are correct?
>
> I'll have to say that is a new term to me, "implementation
> artefacts[sic]". Can you explain?
Nothing non-obvious. An implementation artefact is something that is part of /a/ specific design implementation, as opposed to something that is an inherent part of /the/ problem.
> I test behavior. Behavior is what is specified for a design, so why
> would you test anything else?
Clearly you haven't practiced XP/Agile/Lean development.

You sound like a 20th century hardware engineer, rather than a 21st century software "engineer". You must learn to accept that all new things are, in every way, better than the old ways.

Excuse me while I go and wash my mouth out with soap.
>> Naive unit tests often end up testing the individual
>> low-level implementation artefacts, not the design.
>> Those are useful when refactoring, but otherwise
>> are not sufficient.
On 18/05/17 18:05, rickman wrote:
> On 5/18/2017 12:08 PM, lasselangwadtchristensen@gmail.com wrote:
>> On Thursday, 18 May 2017 at 15:48:19 UTC+2, Theo Markettos wrote:
>> [snip]
>>
>> if two different implementations agree, it adds a bit more confidence
>> than an implementation agreeing with itself.
>
> The point is if both designs were built with the same misunderstanding
> of the requirements, they could both be wrong. While not common, this
> is not unheard of. It could be caused by cultural biases (each company
> is a culture) or a poorly written specification.
The prior question is whether the specification is correct. Or more realistically, to what extent it is/isn't correct, and the best set of techniques and processes for reducing the imperfection. And that leads to XP/Agile concepts, to deal with the suboptimal aspects of Waterfall Development. Unfortunately the zealots can't accept that what you gain on the swings you lose on the roundabouts.
On 18/05/17 19:03, Jan Coombs wrote:
> On Tue, 16 May 2017 15:21:49 -0500
> Tim Wescott <tim@seemywebsite.really> wrote:
>
>> Anyone doing any test driven design for FPGA work?
>
> If you do hardware design with an interpretive language, then
> test driven design is essential:
>
> http://docs.myhdl.org/en/stable/manual/unittest.html
>
> My hobby project is long and slow, but I think this discipline
> is slowly improving my productivity.
It doesn't matter in the slightest whether or not the language is interpreted. Consider that, for example, C is (usually) compiled to assembler. That assembler is then interpreted by microcode (or its more modern equivalent!) into RISC operations, which are then interpreted by hardware.
On 5/18/2017 6:10 PM, Tom Gardner wrote:
> On 18/05/17 18:05, rickman wrote:
>> [snip]
>>
>> The point is if both designs were built with the same misunderstanding
>> of the requirements, they could both be wrong. While not common, this
>> is not unheard of. It could be caused by cultural biases (each company
>> is a culture) or a poorly written specification.
>
> The prior question is whether the specification is correct.
>
> Or more realistically, to what extent it is/isn't correct,
> and the best set of techniques and processes for reducing
> the imperfection.
>
> And that leads to XP/Agile concepts, to deal with the suboptimal
> aspects of Waterfall Development.
>
> Unfortunately the zealots can't accept that what you gain
> on the swings you lose on the roundabouts.
I'm sure you know exactly what you meant. :)

-- Rick C
On 5/18/2017 6:06 PM, Tom Gardner wrote:
> On 18/05/17 18:01, rickman wrote:
>> On 5/18/2017 12:14 PM, Tom Gardner wrote:
>>
>>> My preference is anything that avoids deeply nested
>>> if/then/else/switch statements, since they rapidly
>>> become a maintenance nightmare. (I've seen nesting
>>> 10 deep!)
>>
>> Such deep layering likely indicates a poor problem decomposition, but
>> it is hard to say without looking at the code.
>
> It was a combination of technical and personnel factors.
> The overriding business imperative was, at each stage,
> to make the smallest and /incrementally/ cheapest modification.
>
> The road to hell is paved with good intentions.
If we are bandying about platitudes, I will say: penny wise, pound foolish.
>> Normally there is a switch for the state variable and conditionals
>> within each case to evaluate inputs. Typically this is not so complex.
>
> This was an inherently complex task that was ineptly
> implemented. I'm not going to define how ineptly,
> because you wouldn't believe it. I only believe it
> because I saw it, and boggled.
Good design is about simplifying the complex. Ineptitude is a separate issue and can ruin even simple designs.
>>> Also, design patterns that enable logging of events
>>> and states should be encouraged and left in the code
>>> at runtime. I've found them /excellent/ techniques for
>>> correctly deflecting blame onto the other party :)
>>>
>>> Should you design in a proper FSM style/language
>>> and autogenerate the executable source code, or code
>>> directly in the source language? Difficult, but there
>>> are very useful OOP design patterns that make it easy.
>>
>> Designing in anything other than the HDL you are using increases the
>> complexity of backing up your tools. In addition to source code, it
>> can be important to be able to restore the development environment.
>> I don't bother with FSM tools other than tools that help me think.
>
> Very true. I use that argument, and more, to caution
> people against inventing Domain Specific Languages
> when they should be inventing Domain Specific Libraries.
>
> Guess which happened in the case I alluded to above.
An exception to that rule is programming in Forth. It is a language where programming *is* extending the language. There are many situations where the process ends up with programs written in what appears to be a domain-specific language, but working quite well. So don't throw the baby out with the bathwater when trying to save designers from themselves.
>>> And w.r.t. TDD, should your tests demonstrate the
>>> FSM's design is correct or that the implementation
>>> artefacts are correct?
>>
>> I'll have to say that is a new term to me, "implementation
>> artefacts[sic]". Can you explain?
>
> Nothing non-obvious. An implementation artefact is
> something that is part of /a/ specific design implementation,
> as opposed to something that is an inherent part of
> /the/ problem.
Why would I want to test design artifacts? The tests in TDD are developed from the requirements, not the design, right?
>> I test behavior. Behavior is what is specified for a design, so why
>> would you test anything else?
>
> Clearly you haven't practiced XP/Agile/Lean development
> practices.
>
> You sound like a 20th century hardware engineer, rather
> than a 21st century software "engineer". You must learn
> to accept that all new things are, in every way, better
> than the old ways.
>
> Excuse me while I go and wash my mouth out with soap.
Lol

-- Rick C