FPGARelated.com
Forums

Test Driven Design?

Started by Tim Wescott May 16, 2017
On 19/05/17 01:53, rickman wrote:
> On 5/18/2017 6:06 PM, Tom Gardner wrote:
>> On 18/05/17 18:01, rickman wrote:
>>> On 5/18/2017 12:14 PM, Tom Gardner wrote:
>>>> Also, design patterns that enable logging of events
>>>> and states should be encouraged and left in the code
>>>> at runtime. I've found them /excellent/ techniques for
>>>> correctly deflecting blame onto the other party :)
>>>>
>>>> Should you design in a proper FSM style/language
>>>> and autogenerate the executable source code, or code
>>>> directly in the source language? Difficult, but there
>>>> are very useful OOP design patterns that make it easy.
>>>
>>> Designing in anything other than the HDL you are using increases the complexity
>>> of backing up your tools. In addition to source code, it can be important to be
>>> able to restore the development environment. I don't bother with FSM tools
>>> other than tools that help me think.
>>
>> Very true. I use that argument, and more, to caution
>> people against inventing Domain Specific Languages
>> when they should be inventing Domain Specific Libraries.
>>
>> Guess which happened in the case I alluded to above.
>
> An exception to that rule is programming in Forth. It is a language where
> programming *is* extending the language. There are many situations where the
> process ends up with programs written what appears to be a domain specific
> language, but working quite well. So don't throw the baby out with the bath
> when trying to save designers from themselves.
I see why you are saying that, but I disagree. The Forth /language/ is pleasantly simple. The myriad Forth words (e.g. cmove, catch, canonical etc) in most Forth environments are part of the "standard library", not the language per se.

Forth words are more-or-less equivalent to functions in a trad language. Defining new words is therefore like defining a new function.

Just as defining new words "looks like" defining a DSL, so - at the "application level" - defining new functions also looks like defining a new DSL.

Most importantly, both new functions and new words automatically have the invaluable tools support without having to do anything. With a new DSL, all the tools (from parsers to browsers) also have to be built.
>>>> And w.r.t. TDD, should your tests demonstrate the
>>>> FSM's design is correct or that the implementation
>>>> artefacts are correct?
>>>
>>> I'll have to say that is a new term to me, "implementation artefacts[sic]". Can
>>> you explain?
>>
>> Nothing non-obvious. An implementation artefact is
>> something that is part of /a/ specific design implementation,
>> as opposed to something that is an inherent part of
>> /the/ problem.
>
> Why would I want to test design artifacts? The tests in TDD are developed from
> the requirements, not the design, right?
Ideally, but only to some extent. TDD is frequently used at a much lower level, where it is usually divorced from specs.

TDD is also frequently used with - and implemented in the form of - unit tests, which are definitely divorced from the spec.

Hence, in the real world, there is bountiful opportunity for diversion from the obvious pure sane course. And Murphy's Law definitely applies.

Having said that, both TDD and Unit Testing are valuable additions to the designer's toolchest. But they must be used intelligently[1], and are merely codifications of things most of us have been doing for decades.

No change there, then.

[1] be careful of external consultants proselytising the teaching courses they are selling. They have a hammer, and everything /does/ look like a nail.
On 5/19/2017 4:59 AM, Tom Gardner wrote:
> On 19/05/17 01:53, rickman wrote:
>> On 5/18/2017 6:06 PM, Tom Gardner wrote:
>>> On 18/05/17 18:01, rickman wrote:
>>>> On 5/18/2017 12:14 PM, Tom Gardner wrote:
>>>>> Also, design patterns that enable logging of events
>>>>> and states should be encouraged and left in the code
>>>>> at runtime. I've found them /excellent/ techniques for
>>>>> correctly deflecting blame onto the other party :)
>>>>>
>>>>> Should you design in a proper FSM style/language
>>>>> and autogenerate the executable source code, or code
>>>>> directly in the source language? Difficult, but there
>>>>> are very useful OOP design patterns that make it easy.
>>>>
>>>> Designing in anything other than the HDL you are using increases the complexity
>>>> of backing up your tools. In addition to source code, it can be important to be
>>>> able to restore the development environment. I don't bother with FSM tools
>>>> other than tools that help me think.
>>>
>>> Very true. I use that argument, and more, to caution
>>> people against inventing Domain Specific Languages
>>> when they should be inventing Domain Specific Libraries.
>>>
>>> Guess which happened in the case I alluded to above.
>>
>> An exception to that rule is programming in Forth. It is a language where
>> programming *is* extending the language. There are many situations where the
>> process ends up with programs written what appears to be a domain specific
>> language, but working quite well. So don't throw the baby out with the bath
>> when trying to save designers from themselves.
>
> I see why you are saying that, but I disagree. The
> Forth /language/ is pleasantly simple. The myriad
> Forth words (e.g. cmove, catch, canonical etc) in most
> Forth environments are part of the "standard library",
> not the language per se.
>
> Forth words are more-or-less equivalent to functions
> in a trad language. Defining new words is therefore
> like defining a new function.
I can't find a definition for "trad language".
> Just as defining new words "looks like" defining
> a DSL, so - at the "application level" - defining
> new functions also looks like defining a new DSL.
>
> Most importantly, both new functions and new words
> automatically have the invaluable tools support without
> having to do anything. With a new DSL, all the tools
> (from parsers to browsers) also have to be built.
I have no idea what distinction you are trying to make. Why is making new tools a necessary part of defining a domain specific language? If it walks like a duck...

FRONT LED ON TURN

That could be the domain specific language under Forth for turning on the front LED of some device. Sure looks like a language to me.

I have considered writing a parser for a type of XML file simply by defining the syntax as Forth words. So rather than "process" the file with an application program, the Forth compiler would "compile" the file. I'd call that a domain specific language.
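The "FRONT LED ON TURN" point above can be sketched in a conventional language too. Below is a toy Python illustration (not real Forth, and every name in it is invented for the example): a dictionary of named "words", each just a function operating on a shared stack, plus an outer interpreter that applies them left to right. Defining a new word is defining a new function, yet the result reads like a little DSL.

```python
# Toy illustration of words-as-functions. All names here are
# hypothetical; this is not a real Forth implementation.

state = []                                   # the data stack
device = {"front_led": False, "rear_led": False}
words = {}                                   # the "dictionary": name -> function

def word(name):
    """Register a function under a word name."""
    def register(fn):
        words[name] = fn
        return fn
    return register

@word("FRONT")
def front():
    state.append("front")                    # push a device selector

@word("LED")
def led():
    state.append(state.pop() + "_led")       # refine the selector

@word("ON")
def on():
    state.append(True)                       # push the desired value

@word("TURN")
def turn():
    value = state.pop()                      # pop value, then target,
    target = state.pop()                     # and apply the action
    device[target] = value

def interpret(source):
    """Outer interpreter: execute each word in sequence."""
    for token in source.split():
        words[token]()

interpret("FRONT LED ON TURN")
print(device["front_led"])                   # → True
```

The "language" here is nothing but function definitions, which is the duck-typing argument: no parser or browser had to be built to get it.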
>>>>> And w.r.t. TDD, should your tests demonstrate the
>>>>> FSM's design is correct or that the implementation
>>>>> artefacts are correct?
>>>>
>>>> I'll have to say that is a new term to me, "implementation artefacts[sic]". Can
>>>> you explain?
>>>
>>> Nothing non-obvious. An implementation artefact is
>>> something that is part of /a/ specific design implementation,
>>> as opposed to something that is an inherent part of
>>> /the/ problem.
>>
>> Why would I want to test design artifacts? The tests in TDD are
>> developed from the requirements, not the design, right?
>
> Ideally, but only to some extent. TDD frequently used
> at a much lower level, where it is usually divorced
> from specs.
There is a failure in the specification process. The projects I have worked on which required a formal requirements development process applied it to every level. So every piece of code that would be tested had requirements which defined the tests.
> TDD is also frequently used with - and implemented in > the form of - unit tests, which are definitely divorced > from the spec.
They are? How then are the tests generated?
> Hence, in the real world, there is bountiful opportunity
> for diversion from the obvious pure sane course. And
> Murphy's Law definitely applies.
>
> Having said that, both TDD and Unit Testing are valuable
> additions to the designer's toolchest. But they must
> be used intelligently[1], and are merely codifications of
> things most of us have been doing for decades.
>
> No change there, then.
>
> [1] be careful of external consultants proselytising
> the teaching courses they are selling. They have a
> hammer, and everything /does/ look like a nail.
-- Rick C
On 05/17/2017 11:33 AM, Tim Wescott wrote:
snip
> It's basically a bit of structure on top of some common-sense
> methodologies (i.e., design from the top down, then code from the bottom
> up, and test the hell out of each bit as you code it).
Other than occasional test fixtures, most of my FPGA work in recent years has been FPGA verification of the digital sections of mixed signal ASICs. Your description sounds exactly like the methodology used on both the product ASIC side and the verification FPGA side.

After the FPGA is built and working, you test the hell out of the FPGA system and the product ASIC with completely separate tools and techniques. When problems are discovered, you often fall back to either the ASIC or FPGA simulation test benches to isolate the issue.

The importance of good, detailed, self-checking, top level test benches cannot be over-stressed. For mid and low level blocks that are complex or likely to see significant iterations (due to design spec changes), self-checking test benches are worth the effort.

My experience with manual checking test benches is that the first time you go through it, you remember to examine all the important spots, but the thoroughness of the manual checking on subsequent runs falls off fast. Giving a manual check test bench to someone else is a waste of both of your time.

BobH
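The self-checking idea above can be sketched outside any HDL. Here is a minimal Python analogue under stated assumptions: `dut_adder` is a hypothetical stand-in for a call into the simulated design, and the reference model is plain arithmetic. The point is structural: stimulus, expected result from a golden model, and an automatic compare, instead of eyeballing waveforms on every rerun.

```python
# Minimal sketch of a self-checking test bench, in Python rather than an
# HDL. `dut_adder` is a placeholder for the design under test; in a real
# flow it would be a simulator call.
import random

def reference_adder(a, b):
    # Golden model: 8-bit wrapping add.
    return (a + b) & 0xFF

def dut_adder(a, b):
    # Placeholder for the design under test.
    return (a + b) & 0xFF

def run_testbench(num_vectors=1000, seed=0):
    """Drive random vectors and return a list of mismatches (empty = pass)."""
    rng = random.Random(seed)
    failures = []
    for _ in range(num_vectors):
        a, b = rng.randrange(256), rng.randrange(256)
        expected = reference_adder(a, b)
        actual = dut_adder(a, b)
        if actual != expected:
            failures.append((a, b, expected, actual))
    return failures

print("FAIL" if run_testbench() else "PASS")   # → PASS
```

Because the check is mechanical, the thousandth run is exactly as thorough as the first, which is precisely where manual waveform inspection falls down.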
I've solved the problem of setting up a new project for each testbench by not using any projects. Vivado has a non-project mode in which you write a simple Tcl script that tells Vivado what sources to use and what to do with them.

I have a source directory with HDL files in our repository and dozens of scripts. Each script takes sources from the same directory, creates its own temp working directory, and runs its test there. I also have a script which runs all the tests at once without the GUI. I run it right before going home. When I get to work the next morning, I run a script which analyses the reports looking for errors. If there is an error somewhere, I run the corresponding test script with the GUI switched on to look at the waveforms.

Non-project mode not only allows me to run different tests simultaneously for the same sources, but also allows me to run multiple synthesis runs for them.

I have used only this mode for more than 2 years and am absolutely happy with it. Highly recommended!
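The morning "analyse the reports" step described above could be sketched as follows. This is an assumption-laden illustration, not Ilya's actual script: the directory layout (one working directory per test) and the `ERROR` marker in `*.log` files are invented for the example, though most simulators do tag failing lines with something similar.

```python
# Sketch of a report-scanning step: walk each test's working directory
# and flag any log file containing an error line. The layout and the
# "ERROR" marker are assumptions, not Vivado specifics.
import pathlib

def scan_reports(root):
    """Return {log_path: [error lines]} for every *.log under root."""
    problems = {}
    for log in pathlib.Path(root).rglob("*.log"):
        errors = [line.rstrip()
                  for line in log.read_text().splitlines()
                  if "ERROR" in line]
        if errors:
            problems[str(log)] = errors
    return problems
```

A nightly wrapper would run all the batch tests first, then call `scan_reports` on the top-level temp directory and print only the tests whose logs need a GUI rerun.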
On 5/19/2017 6:31 PM, Ilya Kalistru wrote:
> I've solved the problem with setting up a new project for each testbench by not using any projects. Vivado has a non project mode when you write a simple tcl script which tells vivado what sources to use and what to do with them.
>
> I have a source directory with hdl files in our repository and dozens of scripts. Each script takes sources from the same directory and creates its own temp working directory and runs its test there. I also have a script which runs all the tests at once without GUI. I run it right before coming home. When I come at work in the next morning I run a script which analyses reports looking for errors. If there is an error somewhere, I run the corresponding test script with GUI switched on to look at waveforms.
>
> Non-project mode not only allows me to run different tests simultaneously for the same sources, but also allows me to run multiple synthesis for them.
>
> I use only this mode for more then 2 years and absolutely happy with that. Highly recommend!
Interesting. Vivado is what, Xilinx?

-- Rick C
On Saturday, May 20, 2017 at 00:57:24 UTC+2, rickman wrote:
> On 5/19/2017 6:31 PM, Ilya Kalistru wrote:
>> I've solved the problem with setting up a new project for each testbench by not using any projects. Vivado has a non project mode when you write a simple tcl script which tells vivado what sources to use and what to do with them.
>>
>> I have a source directory with hdl files in our repository and dozens of scripts. Each script takes sources from the same directory and creates its own temp working directory and runs its test there. I also have a script which runs all the tests at once without GUI. I run it right before coming home. When I come at work in the next morning I run a script which analyses reports looking for errors. If there is an error somewhere, I run the corresponding test script with GUI switched on to look at waveforms.
>>
>> Non-project mode not only allows me to run different tests simultaneously for the same sources, but also allows me to run multiple synthesis for them.
>>
>> I use only this mode for more then 2 years and absolutely happy with that. Highly recommend!
>
> Interesting. Vivado is what, Xilinx?
yes
Yes. It is Xilinx Vivado.

Another important advantage of non-project mode is that it is fully compatible with source control systems. When you don't have projects, you don't have piles of junk files of unknown purpose that change every time you open a project or run a simulation. In non-project mode you have only HDL sources and Tcl scripts. Therefore all information is stored in the source control system, and when you commit changes you commit only the changes you have made, not random changes to unknown project files.

In this situation, working with IP cores is a bit trickier, but not much. Considering that you don't change IPs very often, it's not a problem at all.

I see that a very small number of HDL designers know and use this mode. Maybe I should write an article about it. Where would it be appropriate to publish it?
On 5/20/2017 3:11 AM, Ilya Kalistru wrote:
> Yes. It is xilinx vivado.
>
> Another important advantage of non-project mode is that it is fully compatible with source control systems. When you don't have projects, you don't have piles of junk files of unknown purpose that changes every time you open a project or run a simulation. In non-project mode you have only hdl sources and tcl scripts. Therefore all information is stored in source control system but when you commit changes you commit only changes you have done, not random changes of unknown project files.
>
> In this situation work with IP cores a bit trickier, but not much. Considering that you don't change ip's very often, it's not a problem at all.
>
> I see that very small number of hdl designers know and use this mode. Maybe I should write an article about it. Where it would be appropriate to publish it?
Doesn't the tool still generate all the intermediate files? The Lattice tool (which uses Synplify for synthesis) creates a huge number of files that only the tools look at. They aren't really project files, they are various intermediate files. Living in the project main directory they really get in the way.

-- Rick C
On 20/05/17 08:11, Ilya Kalistru wrote:
> Yes. It is xilinx vivado.
>
> Another important advantage of non-project mode is that it is fully compatible with source control systems. When you don't have projects, you don't have piles of junk files of unknown purpose that changes every time you open a project or run a simulation. In non-project mode you have only hdl sources and tcl scripts. Therefore all information is stored in source control system but when you commit changes you commit only changes you have done, not random changes of unknown project files.
>
> In this situation work with IP cores a bit trickier, but not much. Considering that you don't change ip's very often, it's not a problem at all.
>
> I see that very small number of hdl designers know and use this mode. Maybe I should write an article about it. Where it would be appropriate to publish it?
That would be useful; the project mode is initially appealing, but the splattered files and SCCS give me the jitters. Publish it everywhere! Any blog and bulletin board you can find, not limited to those dedicated to Xilinx.
Ilya Kalistru <stebanoid@gmail.com> wrote:
> I've solved the problem with setting up a new project for each testbench
> by not using any projects. Vivado has a non project mode when you write a
> simple tcl script which tells vivado what sources to use and what to do
> with them.
Something similar is possible with Intel FPGA (Altera) Quartus. You need one tcl file for settings, and building is a few commands which we run from a Makefile.

All our builds run in continuous integration, which extracts logs and timing/area numbers. The bitfiles then get downloaded and booted on FPGA, then the test suite and benchmarks are run automatically to monitor performance. Numbers then come back to continuous integration for graphing.

Theo
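The "extract timing/area numbers" step in a flow like Theo's could be sketched as below. To be clear about assumptions: the report lines matched here are invented for illustration, and real Quartus report formats differ, so the patterns would need adapting before any real use.

```python
# Sketch of pulling timing/area metrics out of a build report so a CI
# system can graph them over time. The report format is hypothetical.
import re

def extract_metrics(report_text):
    """Return a dict of metrics found in a (hypothetical) report."""
    metrics = {}
    slack = re.search(r"Worst-case slack\s*:\s*(-?\d+\.\d+)\s*ns", report_text)
    if slack:
        metrics["slack_ns"] = float(slack.group(1))
    cells = re.search(r"Logic cells\s*:\s*(\d+)", report_text)
    if cells:
        metrics["logic_cells"] = int(cells.group(1))
    return metrics

sample = "Worst-case slack : -0.123 ns\nLogic cells : 4521\n"
print(extract_metrics(sample))   # → {'slack_ns': -0.123, 'logic_cells': 4521}
```

The CI job would run this over each build's report and post the numbers to whatever the team uses for graphing, so timing or area regressions show up as a trend rather than a surprise at tape-out.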