
Finally! A Completely Open Complete FPGA Toolchain

Started by rickman July 27, 2015
On 11/08/15 13:20, Walter Banks wrote:
> On 11/08/2015 2:32 AM, David Brown wrote:
>> On 11/08/15 02:51, DJ Delorie wrote:
>>> rickman <gnuarm@gmail.com> writes:
>>>> If FOSS compilers for CPUs have mostly limited code to subsets
>>>> of instructions to make the compiler easier to code and maintain
>>>> that's fine.
>>>
>>> As one of the GCC maintainers, I can tell you that the opposite is
>>> true. We take advantage of everything the ISA offers.
>>
>> My guess is that Walter's experience here is with SDCC rather than
>> gcc, since he writes compilers that - like SDCC - target small,
>> awkward 8-bit architectures. In that world, there are often many
>> variants of the cpu - the 8051 is particularly notorious - and
>> getting the best out of these devices often means making sure you use
>> the extra architectural features your particular device provides.
>> SDCC is an excellent tool, but as Walter says it works with various
>> subsets of ISA provided by common 8051, Z80, etc., variants. The big
>> commercial toolchains for such devices, such as from Keil, IAR and
>> Walter's own Bytecraft, provide better support for the range of
>> commercially available parts.
>
> That frames the point I was making about bitstream information. My
> limited understanding of the issue is that getting the bitstream
> information correct for a specific part goes beyond getting the
> internal interconnects functional, to issues of timing, power, gate
> position and data loads.
>
> It is not that FOSS couldn't or shouldn't do it, but it would change a
> lot of things in both the FOSS and FPGA worlds. The chip companies
> have traded speed for detail complexity, in the same way that speed
> has been traded for ISA use restrictions (specific instruction
> combinations) in many of the embedded system processors we have
> supported.
This is not really a FOSS / closed-software issue (despite the thread). Bitstream information in FPGAs is not really suitable for /any/ third party; it matters little whether the development is open or closed. When an FPGA company makes a new design, the details flow automatically from the hardware design into the placer/router/generator software; the information content and level of detail are far too high to handle sensibly via documentation or any other interchange between significantly separated groups.

Though I have no "inside information" about how FPGA companies do their development, I would expect a great deal of back-and-forth between the hardware designers, the software designers, and the groups testing simulations to figure out how well the devices work in practice. Whereas with a cpu design the ISA is at least mostly fixed early in the design process, and the chip can be simulated and tested with nothing more than a simple assembler, for FPGAs the bitstream will not be solidified until the final hardware design is complete, and you are totally dependent on the placer/router/generator software while doing the design.

All this means that it is almost infeasible for anyone to make a sensible third-party generator, at least for large FPGAs. And the FPGA manufacturers cannot avoid making such tools anyway. At best, third parties (FOSS or not) can hope to make limited bitstream models of a few small FPGAs, and get something that works but is far from optimal for the device.

Of course, many interesting ideas can come out of even such limited tools, so it is still worth making them and "opening" the bitstream models for a few small FPGAs. For some uses it is an advantage that all software in the chain is open source, even if the result is not as speed- or space-optimal. For academic use, it makes research and study much easier, and can lead to new ideas or algorithms for improving the FPGA development process. And you can do weird things - I remember long ago reading of someone who used a genetic algorithm on bitstreams for a small FPGA to make a filter system without actually knowing /how/ it worked!
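[That last anecdote is most likely Adrian Thompson's evolved-hardware experiments on a Xilinx XC6216. The shape of the idea fits in a page of C. The sketch below is hypothetical throughout: eval_bitstream() is a stand-in for "program a real device and measure the evolved circuit's behaviour", which is exactly the step that needs an open bitstream format; here it just counts set bits so the program runs.]

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BITS 1024           /* configuration bits in our toy device */
#define POP  32             /* population size */
#define GENS 5000           /* tournament rounds to run */

typedef struct { unsigned char bits[BITS / 8]; double fitness; } genome;

/* Placeholder fitness: just counts set bits.  A real experiment would
   load the bits into an actual FPGA and score the measured behaviour
   of the evolved circuit. */
static double eval_bitstream(const unsigned char *b)
{
    double score = 0;
    for (size_t i = 0; i < BITS / 8; i++)
        for (int j = 0; j < 8; j++)
            score += (b[i] >> j) & 1;
    return score;
}

static void mutate(genome *g)   /* flip one random configuration bit */
{
    size_t bit = (size_t)rand() % BITS;
    g->bits[bit / 8] ^= (unsigned char)(1u << (bit % 8));
}

int main(void)
{
    genome pop[POP];
    for (int i = 0; i < POP; i++) {
        for (size_t j = 0; j < BITS / 8; j++)
            pop[i].bits[j] = (unsigned char)rand();
        pop[i].fitness = eval_bitstream(pop[i].bits);
    }
    for (int gen = 0; gen < GENS; gen++) {
        /* tournament: pick two distinct genomes, overwrite the loser
           with a mutated copy of the winner */
        int a = rand() % POP;
        int b = (a + 1 + rand() % (POP - 1)) % POP;
        genome *win  = pop[a].fitness >= pop[b].fitness ? &pop[a] : &pop[b];
        genome *lose = (win == &pop[a]) ? &pop[b] : &pop[a];
        memcpy(lose->bits, win->bits, sizeof lose->bits);
        mutate(lose);
        lose->fitness = eval_bitstream(lose->bits);
    }
    double best = 0;
    for (int i = 0; i < POP; i++)
        if (pop[i].fitness > best)
            best = pop[i].fitness;
    printf("best fitness after %d rounds: %g\n", GENS, best);
    return 0;
}

[Note that nothing in the loop needs to understand what the bits mean - which is both the charm of the approach and why it works even on an undocumented format, one physical device at a time.]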
On 8/10/2015 8:51 PM, DJ Delorie wrote:
> rickman <gnuarm@gmail.com> writes:
>> If FOSS compilers for CPUs have mostly limited code to subsets of
>> instructions to make the compiler easier to code and maintain that's
>> fine.
>
> As one of the GCC maintainers, I can tell you that the opposite is true.
> We take advantage of everything the ISA offers.
You are replying to the wrong person. I was not saying GCC limited the instruction set used; I was positing a reason for Walter Banks's claim that this was true. My point is that there are different pressures in compiling for FPGAs and for CPUs.

-- Rick
On 8/11/2015 5:14 AM, David Brown wrote:
> On 11/08/15 10:59, Theo Markettos wrote:
>> DJ Delorie <dj@delorie.com> wrote:
>>> rickman <gnuarm@gmail.com> writes:
>>>> If FOSS compilers for CPUs have mostly limited code to subsets of
>>>> instructions to make the compiler easier to code and maintain that's
>>>> fine.
>>>
>>> As one of the GCC maintainers, I can tell you that the opposite is true.
>>> We take advantage of everything the ISA offers.
>>
>> But the point is the ISA is the software-level API for the processor.
>> There's a lot more fancy stuff in the microarchitecture that you don't get
>> exposed to as a compiler writer[1]. The contract between programmers and
>> the CPU vendor is the vendor will implement the ISA API, and software
>> authors can be confident their software will work.[2]
>>
>> You don't get exposed to things like branch latency, pipeline hazards,
>> control flow graph dependencies, and so on, because microarchitectural
>> techniques like branch predictors, register renaming and out-of-order
>> execution do a massive amount of work to hide those details from the
>> software world.
>
> As you note below, that is true regarding the functional execution
> behaviour - but not regarding the speed. For many targets, gcc can take
> such non-ISA details into account as well as a large proportion of the
> device-specific ISA (contrary to what Walter thought).
I'm not clear on what is being said about speed. It is my understanding that compiler writers often consider the speed of the output and try hard to optimize it for each particular generation of a processor ISA, or even for versions of processors sharing the same ISA. So I don't see that as particularly different from FPGAs. Sure, FPGAs require a *lot* of work to get routing to meet timing; that is the primary purpose of one of the three steps in FPGA design tools: compile, place, route. I don't see this as fundamentally different from CPU compilers in a way that affects the FOSS issue.
>> The nearest we came is VLIW designs like Itanium where more
>> microarchitectural detail was exposed to the compiler - which turned out to
>> be very painful for the compiler writer.
>>
>> There is no such API for FPGAs - the compiler has to drive the raw
>> transistors to set up the routing for the exact example of the chip being
>> programmed. Not only that, there are no safeguards - if you drive those
>> transistors wrong, your chip catches fire.
>
> Indeed. The bitstream and the match between configuration bits and
> functionality in an FPGA do not really correspond to a cpu's ISA. They
> are at a level of detail and complexity that is /way/ beyond an ISA.
I think that is not a useful distinction. If you include all aspects of writing compilers, the ISA has to be supplemented with other information to get good output code; if you consider only the ISA, your code will never be very good. In the end, the only useful distinction between CPU tools and FPGA tools is that FPGA users are, in general, less capable of modifying the tools.

-- Rick
On Tuesday, August 11, 2015 at 3:59:22 AM UTC-5, Theo Markettos wrote:
> DJ Delorie <dj@....com> wrote:
>> rickman <gnuarm@....com> writes:
>>> If FOSS compilers for CPUs have mostly limited code to subsets of
>>> instructions to make the compiler easier to code and maintain that's
>>> fine.
>>
>> As one of the GCC maintainers, I can tell you that the opposite is true.
>> We take advantage of everything the ISA offers.
>
> But the point is the ISA is the software-level API for the processor.
> There's a lot more fancy stuff in the microarchitecture that you don't get
> exposed to as a compiler writer[1]. The contract between programmers and
> the CPU vendor is the vendor will implement the ISA API, and software
> authors can be confident their software will work.[2]
>
> You don't get exposed to things like branch latency, pipeline hazards,
> control flow graph dependencies, and so on, because microarchitectural
> techniques like branch predictors, register renaming and out-of-order
> execution do a massive amount of work to hide those details from the
> software world.
>
> The nearest we came is VLIW designs like Itanium where more
> microarchitectural detail was exposed to the compiler - which turned out to
> be very painful for the compiler writer.
>
> There is no such API for FPGAs - the compiler has to drive the raw
> transistors to set up the routing for the exact example of the chip being
> programmed. Not only that, there are no safeguards - if you drive those
> transistors wrong, your chip catches fire.
>
> Theo
>
> [1] There is a certain amount of performance tweaking you can do with
> knowledge of caching, prefetching, etc - but you rarely have the problem of
> functional correctness; the ISA is not violated, even if slightly slower
>
> [2] To a greater or lesser degree - Intel takes this to extremes,
> supporting binary compatibility of OSes back to the 1970s; ARM requires the
> OS to co-evolve but userland programs are (mostly) unchanged
One could make the analogy that an FPGA's ISA is the LUT, register, ALU and RAM primitives that the mapper generates from the EDIF. There is no suitable analogy for the router phase of bitstream generation: the routing resources are a hierarchy of variable-length wires running in an assortment of directions (horizontal, vertical, sometimes diagonal), with pass transistors used to connect wires, sources and destinations. Timing-driven place & route is easy to express, difficult to implement. Register and/or logic replication may be performed to improve timing.

There are some open(?) router tools at the University of Toronto: http://www.eecg.toronto.edu/~jayar/software/software.html

Jim Brakefield
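[The Toronto page includes VPR, whose placer is built around simulated annealing. A toy, wirelength-only version of that inner loop fits in a page of C - the netlist here is randomly invented and there are no timing terms, so this is a sketch of the technique, not of VPR itself.]

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define CELLS 64
#define NETS  96
#define GRID   8

static int x[CELLS], y[CELLS];         /* current placement           */
static int net_a[NETS], net_b[NETS];   /* random 2-pin netlist        */

static int cost(void)                  /* total Manhattan wirelength  */
{
    int c = 0;
    for (int n = 0; n < NETS; n++)
        c += abs(x[net_a[n]] - x[net_b[n]]) + abs(y[net_a[n]] - y[net_b[n]]);
    return c;
}

int main(void)
{
    for (int i = 0; i < CELLS; i++) { x[i] = i % GRID; y[i] = i / GRID; }
    for (int n = 0; n < NETS; n++) {
        net_a[n] = rand() % CELLS;
        net_b[n] = rand() % CELLS;
    }
    int cur = cost();
    /* one proposed move per temperature step, to keep the toy tiny */
    for (double t = 10.0; t > 0.01; t *= 0.995) {
        int i = rand() % CELLS, j = rand() % CELLS, tmp;
        /* propose swapping two cells' locations */
        tmp = x[i]; x[i] = x[j]; x[j] = tmp;
        tmp = y[i]; y[i] = y[j]; y[j] = tmp;
        int next = cost();
        /* Metropolis criterion: always accept improvements, sometimes
           accept uphill moves so we can escape local minima */
        if (next <= cur || exp((cur - next) / t) > (double)rand() / RAND_MAX)
            cur = next;
        else {                         /* reject: undo the swap        */
            tmp = x[i]; x[i] = x[j]; x[j] = tmp;
            tmp = y[i]; y[i] = y[j]; y[j] = tmp;
        }
    }
    printf("final wirelength: %d\n", cur);
    return 0;
}

[Real placers update the cost incrementally per move instead of recomputing it, and fold slack-based timing weights into the cost function - that is where "easy to express, difficult to implement" bites.]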
On 11.08.2015 08:32, David Brown wrote:
> On 11/08/15 02:51, DJ Delorie wrote:
>> rickman <gnuarm@gmail.com> writes:
>>> If FOSS compilers for CPUs have mostly limited code to subsets of
>>> instructions to make the compiler easier to code and maintain that's
>>> fine.
>>
>> As one of the GCC maintainers, I can tell you that the opposite is true.
>> We take advantage of everything the ISA offers.
>
> My guess is that Walter's experience here is with SDCC rather than gcc,
> since he writes compilers that - like SDCC - target small, awkward 8-bit
> architectures. In that world, there are often many variants of the cpu
> - the 8051 is particularly notorious - and getting the best out of these
> devices often means making sure you use the extra architectural features
> your particular device provides. SDCC is an excellent tool, but as
> Walter says it works with various subsets of ISA provided by common
> 8051, Z80, etc., variants. The big commercial toolchains for such
> devices, such as from Keil, IAR and Walter's own Bytecraft, provide
> better support for the range of commercially available parts.
>
> gcc is in a different world - it is a much bigger compiler suite, with
> more developers than SDCC, and a great deal more support from the cpu
> manufacturers and other commercial groups. One does not need to dig
> further than the manual pages to see the huge range of options for
> optimising use of different variants of many of the targets it supports -
> including not just use of differences in the ISA, but also differences
> in timings and instruction scheduling.
I'd say the SDCC situation is more complex, and it seems to do quite well compared to other compilers for the same architectures. On the one hand, SDCC has always had few developers. It has some quite advanced optimizations, but on the other hand it is lacking in some standard optimizations and features (SDCC's pointer analysis is not that good, we don't have generalized constant propagation yet, and there are some standard C features still missing - see below, after the discussion of the ports). IMO, the biggest weaknesses are there, and not in the use of exotic instructions.

The 8051 has many variants, and SDCC currently does not support some of the advanced features available in some of them, such as 4 dptrs, etc. I do not know how SDCC compares to non-free compilers in that respect.

The Z80 is already a bit different. We use the differences in the instruction sets of the Z80, Z180, LR35902, Rabbit, TLCS-90. SDCC does not use the undocumented instructions available in some Z80 variants, and does not use the alternate register set for code generation; there definitely is potential for further improvement. But the last time I did a comparison of compilers for these architectures, IAR was the only one that did better than SDCC for some of them.

Newer architectures supported by SDCC are the Freescale HC08, S08 and the STMicroelectronics STM8. The non-free compilers for these targets often seem able to generate better code, but SDCC is not far behind.

The SDCC PIC backends are not up to the standard of the others.

In terms of standards compliance, IMO, SDCC is doing better than the non-free compilers, with the exception of IAR. Most non-free compilers support something resembling C90 with a few deviations from the standard; IAR seems to support mostly standard C99. SDCC has a few gaps, even in C90 (such as K&R functions and assignment of structs). On the other hand, SDCC supports most of the new features of C99 and C11 (the only missing feature introduced in C11 seems to be UTF-8 strings).

Philipp
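[For readers who have not seen them, the two C90 gaps Philipp names look like the fragment below. It is illustrative only: any conforming C90 compiler accepts it, while SDCC at the time - per the post - did not.]

#include <stdio.h>

struct point { int x; int y; };

/* K&R-style ("old-style") definition: the parameter types are declared
   after the parameter list rather than inside it */
int manhattan(a, b)
struct point a;
struct point b;
{
    int dx = a.x > b.x ? a.x - b.x : b.x - a.x;
    int dy = a.y > b.y ? a.y - b.y : b.y - a.y;
    return dx + dy;
}

int main(void)
{
    struct point p = { 1, 2 };
    struct point q;
    q = p;                  /* whole-struct assignment */
    q.x = 5;
    printf("%d\n", manhattan(p, q));
    return 0;
}

[Struct assignment is easy to take for granted, but on an 8-bitter it expands to a multi-byte copy that the backend has to generate, which hints at why small-target compilers lag here.]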
On 8/13/2015 6:07 AM, Philipp Klaus Krause wrote:
> The SDCC PIC backends are not up to the standard of the others.
Is the PIC too much of an oddball to keep up with
or
is there no future in 8-bit PIC?
or
are 32-bit chips more fun?

If there is a better place to discuss this, please let me know.
On 8/13/2015 10:11 AM, hamilton wrote:
> On 8/13/2015 6:07 AM, Philipp Klaus Krause wrote:
>> The SDCC PIC backends are not up to the standard of the others.
>
> Is the PIC too much of an oddball to keep up with
> or
> is there no future in 8-bit PIC?
> or
> are 32-bit chips more fun?
>
> If there is a better place to discuss this, please let me know.
I don't know tons about the 32-bit chips, which are mostly ARMs, but the initialization is more complex; it is a good idea to let the tools handle that for you. All of the 8-bit chips I've used were very simple to get off the ground.

-- Rick
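[To make "the initialization is more complex" concrete, here is a hedged sketch of the bare-metal startup a typical 32-bit ARM Cortex-M part needs before main() can even run - work the vendor tools normally generate for you. The _sidata/_sdata/_sbss symbol names follow common linker-script conventions and are assumptions, not any particular vendor's.]

#include <stdint.h>

/* symbols the linker script is assumed to provide */
extern uint32_t _sidata, _sdata, _edata, _sbss, _ebss, _estack;
int main(void);

void Reset_Handler(void)
{
    /* copy initialized data from flash to RAM */
    uint32_t *src = &_sidata, *dst = &_sdata;
    while (dst < &_edata)
        *dst++ = *src++;
    /* zero the .bss section */
    for (dst = &_sbss; dst < &_ebss; dst++)
        *dst = 0;
    main();
    for (;;) ;              /* trap here if main ever returns */
}

/* minimal vector table: initial stack pointer, then the reset vector */
__attribute__((section(".isr_vector")))
const void *vectors[] = { &_estack, (void *)Reset_Handler };

[An 8-bitter, by contrast, typically comes out of reset executing from a fixed address with no mandatory setup beyond perhaps a stack pointer, which is rickman's point.]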
On 13.08.2015 16:11, hamilton wrote:
> On 8/13/2015 6:07 AM, Philipp Klaus Krause wrote:
>> The SDCC PIC backends are not up to the standard of the others.
>
> Is the PIC too much of an oddball to keep up with
> or
> is there no future in 8-bit PIC?
> or
> are 32-bit chips more fun?
I don't consider 32-bit chips more fun. I like CISC 8-bitters, though I prefer those that seem better suited for C.

Again, SDCC has few developers, and at least recently the most active ones don't seem that interested in the PICs. Also, the situation is quite different between the pic14 and pic16 backends. The pic16 backend is not that bad: if someone put a few weeks of work into it, it could probably be brought up to the standard of the other ports in terms of correctness; it already passes large parts of the regular regression test suite. The pic14 backend would require much more work.
> If there is a better place to discuss this, please let me know.
The sdcc-user and sdcc-devel mailing lists seem a better place than comp.arch.fpga.

Philipp
> Again, SDCC has few developers, and at least recently the most active
> ones don't seem that interested in the PICs.
Back to the topic of the open FPGA tool chain: I think there would be many "PICs", i.e. topics which are addressed by no developers, or too few.

But the whole discussion is quite theoretical as long as A & X do not open their bitstream formats. And I do not think that they will do anything to support an open-source solution, as software is the main entry obstacle for FPGA startups. If there were a flexible open-source tool chain with a large developer and user base that could be ported to new architectures easily, it would make things much easier for new competition. (Think gcc...)

Also (as mentioned above), I think that with the good and free tool chains from the suppliers, there would not be much demand for such an open-source tool chain. There are other points where I would see more motivation, and even there not much is happening:
- A good open-source Verilog/VHDL editor (yes, I have heard of Emacs...), as the integrated editors are average (Altera) or bad (Xilinx). (Currently I am evaluating two commercial VHDL editors...)
- A kind of graphical editor for VHDL and Verilog, as the top/higher levels of bigger projects are often a pain IMHO (like writing netlists by hand). I would even start such a project myself if I had the time...

But even with such things, where I think there would be quite some demand, the "critical mass" of the FPGA community is too low to get projects started and especially to keep them running.

Thomas
On 8/13/15 9:44 PM, thomas.entner99@gmail.com wrote:
>> Again, SDCC has few developers, and at least recently the most active
>> ones don't seem that interested in the PICs.
>
> Back to the topic of the open FPGA tool chain: I think there would be
> many "PICs", i.e. topics which are addressed by no developers, or too
> few.
>
> But the whole discussion is quite theoretical as long as A & X do not
> open their bitstream formats. And I do not think that they will do
> anything to support an open-source solution, as software is the main
> entry obstacle for FPGA startups. If there were a flexible open-source
> tool chain with a large developer and user base that could be ported
> to new architectures easily, it would make things much easier for new
> competition. (Think gcc...)
>
> Also (as mentioned above), I think that with the good and free tool
> chains from the suppliers, there would not be much demand for such an
> open-source tool chain. There are other points where I would see more
> motivation, and even there not much is happening:
> - A good open-source Verilog/VHDL editor (yes, I have heard of
>   Emacs...), as the integrated editors are average (Altera) or bad
>   (Xilinx). (Currently I am evaluating two commercial VHDL editors...)
> - A kind of graphical editor for VHDL and Verilog, as the top/higher
>   levels of bigger projects are often a pain IMHO (like writing
>   netlists by hand). I would even start such a project myself if I had
>   the time...
>
> But even with such things, where I think there would be quite some
> demand, the "critical mass" of the FPGA community is too low to get
> projects started and especially to keep them running.
>
> Thomas
One big factor working against an open-source tool chain is that while the FPGA vendors describe the routing inside their devices in general terms, the precise details are not given, and I suspect those details may be considered part of the "secret sauce" that makes a device work. The devices have gotten so big and complicated that it is impractical to use fully populated muxes, and how you choose what connects to what is important.

Processors can have little details like this too, but for processors they tend only to affect execution speed, and a compiler that doesn't take them into account can still do a reasonable job. For an FPGA, without ALL the details you can't even do the routing.
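[A toy model makes that last point concrete. In the sketch below, every routing wire is driven by a sparse mux that can select from only a few candidate sources. The per-mux candidate table here is entirely invented; on a real device it is exactly the table the vendors do not publish, and without it a router cannot even tell which connections are legal.]

#include <stdio.h>

#define WIRES    8
#define MUX_WAYS 4   /* each mux selects among only 4 of the 8 wires */

/* which source wires each destination's mux can reach (invented) */
static const int mux_inputs[WIRES][MUX_WAYS] = {
    { 1, 2, 4, 6 }, { 0, 3, 5, 7 }, { 1, 4, 6, 7 }, { 0, 2, 5, 6 },
    { 1, 3, 5, 7 }, { 0, 2, 4, 6 }, { 1, 3, 5, 7 }, { 0, 2, 4, 6 },
};

/* return the select value that routes 'src' onto 'dst', or -1 if the
   sparse mux simply has no path from src to dst */
static int route(int dst, int src)
{
    for (int sel = 0; sel < MUX_WAYS; sel++)
        if (mux_inputs[dst][sel] == src)
            return sel;
    return -1;
}

int main(void)
{
    printf("wire 4 -> wire 0: sel %d\n", route(0, 4));  /* reachable     */
    printf("wire 3 -> wire 0: sel %d\n", route(0, 3));  /* no such path  */
    return 0;
}

[Scale WIRES into the hundreds of thousands, make the candidate lists irregular and timing-annotated, and the dependence on vendor data becomes total - which is the asymmetry with CPU compilers that this post describes.]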