
Tiny CPUs for Slow Logic

Started by Unknown March 18, 2019
On Wednesday, March 20, 2019 at 6:41:55 AM UTC-4, already...@yahoo.com wrote:
> On Tuesday, March 19, 2019 at 10:07:38 PM UTC+2, Tom Gardner wrote: > > On 19/03/19 17:35, already5chosen@yahoo.com wrote: > > > On Tuesday, March 19, 2019 at 6:19:36 PM UTC+2, Tom Gardner wrote: > > >> > > >> The UK Parliament is an unmitigated dysfunctional mess. > > >> > > > > > > Do you prefer dysfunctional mesh ;) > > > > :) I'll settle for anything that /works/ predictably :( > > > > UK political system is completely off-topic in comp.arch.fpga. However I'd say that IMHO right now your parliament is facing unusually difficult problem on one hand, but at the same time it's not really "life or death" sort of the problem. Having troubles and appearing non-decisive in such situation is normal. It does not mean that the system is broken.
I was watching a video of a guy who bangs together Teslas from salvage cars. This one was about him actually buying a used Tesla from Tesla and the many trials and tribulations he had. He had traveled to a dealership over an hour's drive away, and they said they didn't have anything for him. At one point he says he is not going to get too wigged out over all this because it is a "first world problem". That gave me insight into my own issues: what at first seems to me to be a major issue is often an issue that much of the world would LOVE to have. I'm wondering if Brexit is not one of those issues... I'm just sayin'...

FPGA design is similar. Consider which of your issues are "first world" issues when you design.

Rick C.
On Wednesday, March 20, 2019 at 6:53:07 AM UTC-4, Theo wrote:
> gnuarm.deletethisbit@gmail.com wrote: > > On Tuesday, March 19, 2019 at 10:29:07 AM UTC-4, Theo Markettos wrote: > > > > > When people talk about things like "software running on such heterogeneous > > cores" it makes me think they don't really understand how this could be > > used. If you treat these small cores like logic elements, you don't have > > such lofty descriptions of "system software" since the software isn't > > created out of some global software package. Each core is designed to do > > a specific job just like any other piece of hardware and it has discrete > > inputs and outputs just like any other piece of hardware. If the hardware > > clock is not too fast, the software can synchronize with and literally > > function like hardware, but implementing more complex logic than the same > > area of FPGA fabric might. > > The point is that we need to understand what the whole system is doing. In > the XMOS case, we can look at a piece of software with N threads, running > across the cores provided on the chip. One piece of software, distributed > over the hardware resource available - the system is doing one thing. > > Your bottom-up approach means it's difficult to see the big picture of > what's going on. That means it's hard to understand the whole system, and > to program from a whole-system perspective.
I never mentioned a bottom-up or a top-down approach to design. Nothing about using these small CPUs is about the design "direction". I am pretty sure that you have to define the circuit they will work in before you can start designing the code.
> > Not sure what is hard to think about. It's a CPU, a small CPU with limited memory to implement small tasks that can do rather complex operations compared to a state machine really and includes memory, arithmetic and logic as well as I/O without having to write a single line of HDL. Only the actual app needs to be written.
>
> Here are the semantic descriptions of basic logic elements:
>
> LUT:  q = f(x,y,z)
> FF:   q <= d_in (delay of one cycle)
> BRAM: q = array[addr]
> DSP:  q = a*b + c
>
> A P&R tool can build a system out of these building blocks. It's notable that the state-holding elements in this schema do nothing else except hold state. That makes writing the tools easier (and we all know how difficult the tools already are). In general, we don't tend to instantiate these primitives manually but describe the higher level functions (e.g. a 64 bit add) in HDL and allow the tools to select appropriate primitives for us (e.g. a number of fast-adder blocks chained together).
>
> What's the logic equation of a processor?
Obviously it is like a combination of LUTs with FFs, able to implement any logic you wish, including math. BTW, in many devices the elements are not at all so simple. Xilinx LUTs can be used as shift registers. There is additional logic within the logic blocks that allows math with carry chains, combining LUTs to form larger LUTs, breaking LUTs into smaller LUTs, and let's not forget about routing, which may not be used much anymore, not sure. So your simple world of four elements is really not so valid.
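To make the quoted point concrete, here is a minimal VHDL sketch (entity and signal names invented for illustration): the behaviour is described at a high level, and the synthesis tools map it onto fast-carry chains for the 64-bit add and, on parts that support it, onto SRL-mode LUTs for the shift register, without anyone instantiating those primitives by hand.

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    entity infer_demo is
      port (
        clk  : in  std_logic;
        a, b : in  unsigned(63 downto 0);
        d    : in  std_logic;
        sum  : out unsigned(63 downto 0);
        q    : out std_logic
      );
    end entity infer_demo;

    architecture rtl of infer_demo is
      signal sr : std_logic_vector(15 downto 0) := (others => '0');
    begin
      process(clk)
      begin
        if rising_edge(clk) then
          -- 64-bit add: the tools chain the fast-carry logic for us
          sum <= a + b;
          -- 16-deep shift register: typically packed into a LUT in SRL mode
          -- rather than 16 separate flip-flops
          sr <= sr(14 downto 0) & d;
        end if;
      end process;
      q <= sr(15);
    end architecture rtl;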
> It has state, but vastly more > state than the simplicity of a flipflop. What pattern does the P&R tool > need to match to infer a processor?
Why does it need to be inferred? If you want to write an HDL tool that turns HDL into processor code, have at it. But there are other methods. Someone mentioned that his MO is to use other tools for designing his algorithms and letting that tool generate the software for a processor or the HDL for an FPGA. That would seem easy enough to integrate.
> How is any verification tool going > to understand whether the processor with software is doing the right thing?
Huh? You can't simulate code on a processor???
> If your answer is 'we don't need verification tools, we program by hand' > then a) software has bugs, and automated verification is a handy way to > catch them, and b) you're never going to be writing hundreds of different > mini-programs to run on each core, let alone make them correct.
You seem to have left the roadway here. I'm lost.
> If we scale the processors up a bit, I could see the merits in say a bank > of, say, 32 Cortex M0s that could be interconnected as part of the FPGA > fabric and programmed in software for dedicated tasks (for instance, read > the I2C EEPROM on the DRAM DIMM and configure the DRAM controller at boot).
I don't follow your logic. What is different about the ARM processor compared to the stack processor, other than that it is larger and slower and requires a royalty on each one? Are you talking about writing the code in C vs. whatever is used for the stack processor?
> But this is an SoC construct (built using SoC builder tools, and over which > the programmer has some purview although, as it turns out, sketchier than > you might think[1]). Such CPUs would likely be running bigger corpora of > software (for instance, the DRAM controller vendor's provided initialisation > code) which would likely be in C. But in this case we could just use a > soft-core today (the CPU ISA is most irrelevant for this application, so a > RISC-V/Microblaze/NIOS would be fine). > > [1] https://inf.ethz.ch/personal/troscoe/pubs/hotos15-gerber.pdf
The point of the many hard cores is the saving of resources. Soft cores would be the most wasteful way to implement logic. If the application is large enough they can implement things in software that aren't as practical in HDL, but that would be a different class of logic from the tiny CPUs I'm talking about.
> I can also see another niche, at the extreme bottom end, where a CPLD might > have one of your processors plus a few hundred logic cells. That's > essentially a microcontroller with FPGA, or an FPGA with microcontroller - > which some of the vendors already produce (although possibly not > small/cheap/low power enough). Here I can't see the advantages of using a > stack-based CPU versus paying a bit more to program in C. Although I don't > have experience in markets where the retail price of the product is $1, and so > every $0.001 matters. > > > > I would be interested to know what applications might use heterogenous > > > many-cores and what performance is achievable. > > > > Yes, clearly not getting the concept. Asking about heterogeneous > > performance is totally antithetical to this idea. > > You keep mentioning 700 MIPS, which suggests performance is important. If > these are simple state machine replacements, why do we care about > performance?
You lost me with the gear shift. The mention of instruction rate is about the CPU being fast enough to keep up with the FPGA logic. The issue with "heterogeneous performance" is the "heterogeneous" part, lumping the many CPUs together to create some sort of number cruncher. That's not what this is about. Like in the GA144, I fully expect most CPUs to be sitting around idle most of the time, waiting for data. This is a good thing actually. These CPUs could consume significant current if they ran at GHz rates all the time. I believe that in the GA144, at that slower rate, each processor can use around 2.5 mA. Not sure if a smaller process would use more or less power when running flat out. It's been too many years since I worked with those sorts of numbers.
> In essence, your proposal has a disconnect between the situations existing > FPGA blocks are used (implemented automatically by P&R tools) and the > situations software is currently used (human-driven software and > architectural design). It's unclear how you claim to bridge this gap.
I don't usually think of designing in those terms. If I want to design something, I design it. I ignore many tools, only using the ones I find useful. In this case I would have no problem writing code for the processor and, if needed, rolling a model of the processor into the FPGA simulation to run the code. In a professional implementation I would expect these models to be written for me, as modules that run much faster than HDL so the simulation speed is not impacted.

I certainly don't see how P&R tools would be a problem. They accommodate multipliers, DSP blocks, memory blocks and many, many special bits of assorted components inside FPGAs, which vary from vendor to vendor. Clock generation and distribution is pretty unique to each manufacturer. Lattice has all sorts of modules to offer, like I2C and embedded Flash. Then there are entire CPUs embedded in FPGAs. Why would supporting them be so different from what I am talking about?

Rick C.
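For what it's worth, if a vendor ever shipped such a block, it would presumably be presented to the tools the same way the other hard macros are. A purely hypothetical sketch of what the primitive's declaration might look like (none of these names exist in any real vendor library):

    library ieee;
    use ieee.std_logic_1164.all;

    package tiny_cpu_pkg is
      -- Hypothetical hard tiny-CPU primitive, declared like a DSP or BRAM macro.
      component TINY_CPU is
        generic (
          PROGRAM_INIT : string   -- code image, analogous to BRAM INIT strings
        );
        port (
          clk      : in  std_logic;
          rst      : in  std_logic;
          io_in    : in  std_logic_vector(17 downto 0);
          io_out   : out std_logic_vector(17 downto 0);
          io_valid : out std_logic;   -- handshake toward the fabric
          io_ready : in  std_logic
        );
      end component;
    end package tiny_cpu_pkg;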
On 20/03/2019 15:50, gnuarm.deletethisbit@gmail.com wrote:
> On Wednesday, March 20, 2019 at 6:14:21 AM UTC-4, David Brown wrote: >> On 20/03/2019 03:30, gnuarm.deletethisbit@gmail.com wrote: >>> On Tuesday, March 19, 2019 at 10:29:07 AM UTC-4, Theo Markettos >>> wrote: >>>> Tom Gardner <spamjunk@blueyonder.co.uk> wrote: >>>>> Understand XMOS's xCORE processors and xC language, see how >>>>> they complement and support each other. I found the net >>>>> result stunningly easy to get working first time, without >>>>> having to continually read obscure errata! >>>> >>>> I can see the merits of the XMOS approach. But I'm unclear >>>> how this relates to the OP's proposal, which (I think) is >>>> having tiny CPUs as hard logic blocks on an FPGA, like DSP >>>> blocks. >>>> >>>> I completely understand the problem of running out of hardware >>>> threads, so a means of 'just add another one' is handy. But >>>> the issue is how to combine such things with other synthesised >>>> logic. >>>> >>>> The XMOS approach is fine when the hardware is uniform and the >>>> software sits on top, but when the hardware is synthesised and >>>> the 'CPUs' sit as pieces in a fabric containing random logic >>>> (as I think the OP is suggesting) it becomes a lot harder to >>>> reason about what the system is doing and what the software >>>> running on such heterogeneous cores should look like. Only the >>>> FPGA tools have a full view of what the system looks like, and >>>> it seems stretching them to have them also generate software to >>>> run on these cores. >>> >>> When people talk about things like "software running on such >>> heterogeneous cores" it makes me think they don't really >>> understand how this could be used. If you treat these small >>> cores like logic elements, you don't have such lofty descriptions >>> of "system software" since the software isn't created out of some >>> global software package. Each core is designed to do a specific >>> job just like any other piece of hardware and it has discrete >>> inputs and outputs just like any other piece of hardware. If the >>> hardware clock is not too fast, the software can synchronize with >>> and literally function like hardware, but implementing more >>> complex logic than the same area of FPGA fabric might. >>> >> >> That is software. >> >> If you want to try to get cycle-precise control of the software and >> use that precision for direct hardware interfacing, you are almost >> certainly going to have a poor, inefficient and difficult design. >> It doesn't matter if you say "think of it like logic" - it is /not/ >> logic, it is software, and you don't use that for cycle-precise >> control. You use when you need flexibility, calculations, and >> decisions. > > I suppose you can make anything difficult if you try hard enough. >
Equally, you can make anything sound simple if you are vague enough and wave your hands around.
> The point is you don't have to make it difficult by talking about > "software running on such heterogeneous cores". Just talk about it > being a small hunk of software that is doing a specific job. Then > the mystery is gone and the task can be made as easy as the task is. >
I did not use the phrase "software running on such heterogeneous cores" - and I am not trying to make anything difficult. You are making cpu cores. They run software. Saying they are "like logic elements" or "they connect directly to hardware" does not make it so - and it does not mean that what they run is not software.
> > In VHDL this would be a process(). VHDL programs are typically chock > full of processes and no one wrings their hands worrying about how > they will design the "software running on such heterogeneous cores". > > > BTW, VHDL is software too.
I agree that VHDL is software. And yes, there are usually processes in VHDL designs.

I am not /worrying/ about these devices running software - I am simply saying that they /will/ be running software. I can't comprehend why you want to deny that. It seems that you are frightened of software or programmers, and want to call it anything /but/ software.

If the software a core is running is simple enough to be described in VHDL, then it should be a VHDL process - not software in a cpu core. If it is too complex for that, it is going to have to be programmed separately in an appropriate language. That is not necessarily harder or easier than VHDL design - it is just different.

If you try to force the software to be synchronous with timing on the hardware, /then/ you are going to be in big difficulties. So don't do that - use hardware for the tightest timing, and software for the bits that software is good for.
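As a rough illustration of the dividing line being described (signal names invented): a sequence this simple belongs in a process, not in a core running a program.

    library ieee;
    use ieee.std_logic_1164.all;

    entity capture_ack is
      port (
        clk      : in  std_logic;
        req      : in  std_logic;
        data_in  : in  std_logic_vector(7 downto 0);
        data_out : out std_logic_vector(7 downto 0);
        ack      : out std_logic
      );
    end entity;

    architecture rtl of capture_ack is
    begin
      -- Capture a byte on request and acknowledge it: trivial as a process,
      -- pointless as software on a CPU core.
      process(clk)
      begin
        if rising_edge(clk) then
          if req = '1' then
            data_out <= data_in;
            ack      <= '1';
          else
            ack <= '0';
          end if;
        end if;
      end process;
    end architecture;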
> >>> There is no need to think about how the CPUs would communicate >>> unless there is a specific need for them to do so. The F18A uses >>> a handshaked parallel port in their design. They seem to have >>> done a pretty slick job of it and can actually hang the processor >>> waiting for the acknowledgement saving power and getting an >>> instantaneous wake up following the handshake. This can be used >>> with other CPUs or >>> >> >> Fair enough. > > Ok, that's a start. >
I'd expect that the sensible way to pass data between these, if you need to do so much, is using FIFOs.
On Wednesday, March 20, 2019 at 6:56:51 AM UTC-4, already...@yahoo.com wrote:
> On Tuesday, March 19, 2019 at 10:07:38 PM UTC+2, Tom Gardner wrote:
> > On 19/03/19 17:35, already5chosen@yahoo.com wrote:
> > > On Tuesday, March 19, 2019 at 6:19:36 PM UTC+2, Tom Gardner wrote:
> > >> The "granularity" of the computation and communication will be a key to
> > >> understanding what the OP is thinking.
> > >
> > > I don't know what Rick had in mind. I personally would go for one "hard-CPU" block per 4000-5000 6-input logic elements (i.e. Altera ALMs or Xilinx CLBs). Each block could be configured either as one 64-bit core or a pair of 32-bit cores. The block would contain hard instruction decoders/ALUs/shifters and hard register files. It can optionally borrow adjacent DSP blocks for multipliers. Adjacent embedded memory blocks can be used for data memory. Code memory should be a bit more flexible, giving the designer a choice between embedded memory blocks or distributed memory (X)/MLABs (A).
> >
> > It would be interesting to find an application level description (i.e. language constructs) that
> > - could be automatically mapped onto those primitives by a toolset
> > - was useful for more than a niche subset of applications
> > - was significantly better than existing tools
> >
> > I wouldn't hold my breath :)
>
> I think you are looking at it from the wrong angle. One doesn't really need new tools to design and simulate such things. What's needed is a combination of existing tools - compilers, assemblers, probably software simulator plug-ins into existing HDL simulators, but the latter is just a luxury for speeding up simulations; in principle, feeding the HDL simulator with an RTL model of the CPU core will work too.
I agree, but I think it will be very useful to have a proper model of the CPUs for faster simulations. If it were one CPU, it would be different. But using 100 CPUs would very likely make simulation a real chore without a fast model.
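One way this could look in practice, sketched with made-up names: the hard CPU block gets a single entity with two architectures - the real RTL for detailed work and a fast behavioural model for everyday simulation - and a configuration picks which one a given run uses.

    library ieee;
    use ieee.std_logic_1164.all;

    entity tiny_cpu_block is
      port (
        clk, rst : in  std_logic;
        port_in  : in  std_logic_vector(17 downto 0);
        port_out : out std_logic_vector(17 downto 0)
      );
    end entity;

    -- Fast behavioural stand-in for simulation; the full core would live in
    -- a second architecture, e.g. "rtl".
    architecture behavioural of tiny_cpu_block is
    begin
      process
      begin
        wait until rising_edge(clk);
        if rst = '0' then
          port_out <= port_in;   -- placeholder for "execute the program"
        end if;
      end process;
    end architecture;

    -- Select the fast model for day-to-day simulation runs.
    configuration fast_sim of tiny_cpu_block is
      for behavioural
      end for;
    end configuration;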
> As to niches, all "hard" blocks that we currently have in FPGAs are about niches. It's extremely rare that user's design uses all or majority of the features of given FPGA device and need LUTs, embedded memories, PLLs, multiplies, SERDESs, DDR DRAM I/O blocks etc in exact amounts appearing in the device.
This is exactly the reason why FPGA companies resisted even incorporating block RAM initially. I recall conversations with Xilinx representatives about these issues here. It was indicated that the cost of the added silicon was significant and they would be "seldom" used. Now many people would not buy an FPGA without multipliers and/or DSP blocks. This is really just another step in the same direction.
> It still makes sense, economically, to have them all built in, because masks and other NREs are mighty expensive while silicon itself is relatively cheap. Multiple small hard CPU cores are really not very different from features, mentioned above.
I don't know the details of costs for FPGAs. What I do know is that the CPUs I am talking about would use the silicon area of a rather small number of logic blocks. The reference design I use is in a 180 nm process and is an eighth of a square mm. With an 18 nm process the die area would be about 1,260 sq um. That's not very big. 100 of them would occupy 0.126 sq mm. If they have much use, that's a pretty small die area.

For comparison, an XC7A200T has a die area of about 132 sq mm and some 33,650 slices, which works out to about 3,923 sq um per slice. Of course this is loaded with overhead, which is likely more than half the area, but it gives you some perspective on the cost of adding these CPUs... very, very little, around the die area of a single slice. It also gives you an idea of how large the FPGA logic functions have grown.

Rick C.
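Roughly, assuming ideal quadratic scaling with feature size (real standard cells shrink somewhat less than that), the arithmetic is:

    0.125 mm^2 x (18/180)^2  =  0.125 mm^2 / 100   ~  1,250 um^2 per CPU
    100 CPUs                                       ~  0.125 mm^2 total
    132 mm^2 / 33,650 slices                       ~  3,923 um^2 per slice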
On 20/03/19 14:51, already5chosen@yahoo.com wrote:
> On Wednesday, March 20, 2019 at 4:31:27 PM UTC+2, Tom Gardner wrote: >> On 20/03/19 14:11, already5chosen@yahoo.com wrote: >>> On Wednesday, March 20, 2019 at 3:37:17 PM UTC+2, Tom Gardner wrote: >>>> >>>> But more difficult that creating such a toolset is defining an application >>>> level description that a toolset can munge. >>>> >>>> So, define (initially by example, later more formally) inputs to the >>>> toolset and outputs from it. Then we can judge whether the concepts are >>>> more than handwaving wishes. >>>> >>> >>> I don't understand what you are asking for. >> >> Go back and read the parts of my post that you chose to snip. >> >> Give a handwaving indication of the concepts that avoid the >> conceptual problems that I mentioned. > > Frankly, it starts to sound like you never used soft CPU cores in your designs. > So, for somebody like myself, who uses them routinely for different tasks since 2006, you are really not easy to understand.
Professionally, since 1978 I've done everything from low noise analogue electronics, many hardware-software systems using all sorts of technologies, networking at all levels of the protocol stack, "up" to high availability distributed soft real-time systems. And almost all of that has been on the bleeding edge. So, yes, I do have more than a passing acquaintance with the characteristics of many hardware and software technologies, and where partitions between them can, should and should not be drawn.
> Concept? Concepts are good for new things, not for something that is a variation of something old and routine and obviously working.
Whatever is being proposed, is it old or new? If old then the OP needs enlightenment and concrete examples can easily be noted. If new, then provide the concepts.
>> Or better still, get the OP to do it. >> > > With that part I agree.
On 20/03/19 15:30, David Brown wrote:
> If the software a core is running is simple enough to be described in > VHDL, then it should be a VHDL process - not software in a cpu core. If > it is too complex for that, it is going to have to be programmed > separately in an appropriate language. That is not necessarily harder > or easier than VHDL design - it is just different.
Precisely.
> If you try to force the software to be synchronous with timing on the > hardware, /then/ you are going to be in big difficulties. So don't do > that - use hardware for the tightest timing, and software for the bits > that software is good for.
Precisely.
>>>> There is no need to think about how the CPUs would communicate >>>> unless there is a specific need for them to do so. The F18A uses >>>> a handshaked parallel port in their design. They seem to have >>>> done a pretty slick job of it and can actually hang the processor >>>> waiting for the acknowledgement saving power and getting an >>>> instantaneous wake up following the handshake. This can be used >>>> with other CPUs or >>>> >>> >>> Fair enough. >> >> Ok, that's a start. >> > > I'd expect that the sensible way to pass data between these, if you need > to do so much, is using FIFO's.
And that raises the question of the "comms protocols" or "programming model" between each side, e.g. rendezvous, FIFO depth, blocking, non-blocking, timeouts, etc.
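To give those questions a concrete shape, here is a hedged VHDL sketch (generic and port names invented) of the sort of small FIFO that might sit between a core and the fabric; the depth generic, the full/empty flags and the enables are exactly where the depth/blocking decisions end up living.

    library ieee;
    use ieee.std_logic_1164.all;

    entity small_fifo is
      generic (
        DEPTH : positive := 4;
        WIDTH : positive := 18
      );
      port (
        clk, rst : in  std_logic;
        wr_en    : in  std_logic;
        wr_data  : in  std_logic_vector(WIDTH-1 downto 0);
        full     : out std_logic;
        rd_en    : in  std_logic;
        rd_data  : out std_logic_vector(WIDTH-1 downto 0);
        empty    : out std_logic
      );
    end entity;

    architecture rtl of small_fifo is
      type mem_t is array (0 to DEPTH-1) of std_logic_vector(WIDTH-1 downto 0);
      signal mem    : mem_t;
      signal wp, rp : natural range 0 to DEPTH-1 := 0;
      signal count  : natural range 0 to DEPTH   := 0;
    begin
      full    <= '1' when count = DEPTH else '0';
      empty   <= '1' when count = 0     else '0';
      rd_data <= mem(rp);   -- first-word-fall-through style read port

      process(clk)
        variable do_wr, do_rd : boolean;
      begin
        if rising_edge(clk) then
          if rst = '1' then
            wp <= 0; rp <= 0; count <= 0;
          else
            do_wr := wr_en = '1' and count /= DEPTH;  -- writer stalls when full
            do_rd := rd_en = '1' and count /= 0;      -- reader stalls when empty
            if do_wr then
              mem(wp) <= wr_data;
              wp <= (wp + 1) mod DEPTH;
            end if;
            if do_rd then
              rp <= (rp + 1) mod DEPTH;
            end if;
            if do_wr and not do_rd then
              count <= count + 1;
            elsif do_rd and not do_wr then
              count <= count - 1;
            end if;
          end if;
        end if;
      end process;
    end architecture;

Whether DEPTH is 1, 4 or 64 (or zero, i.e. a pure rendezvous) then becomes an explicit per-link design decision rather than an afterthought.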
On Wednesday, March 20, 2019 at 11:30:15 AM UTC-4, David Brown wrote:
> On 20/03/2019 15:50, gnuarm.deletethisbit@gmail.com wrote: > > On Wednesday, March 20, 2019 at 6:14:21 AM UTC-4, David Brown wrote: > >> On 20/03/2019 03:30, gnuarm.deletethisbit@gmail.com wrote: > >>> On Tuesday, March 19, 2019 at 10:29:07 AM UTC-4, Theo Markettos > >>> wrote: > >>>> Tom Gardner <spamjunk@blueyonder.co.uk> wrote: > >>>>> Understand XMOS's xCORE processors and xC language, see how > >>>>> they complement and support each other. I found the net > >>>>> result stunningly easy to get working first time, without > >>>>> having to continually read obscure errata! > >>>> > >>>> I can see the merits of the XMOS approach. But I'm unclear > >>>> how this relates to the OP's proposal, which (I think) is > >>>> having tiny CPUs as hard logic blocks on an FPGA, like DSP > >>>> blocks. > >>>> > >>>> I completely understand the problem of running out of hardware > >>>> threads, so a means of 'just add another one' is handy. But > >>>> the issue is how to combine such things with other synthesised > >>>> logic. > >>>> > >>>> The XMOS approach is fine when the hardware is uniform and the > >>>> software sits on top, but when the hardware is synthesised and > >>>> the 'CPUs' sit as pieces in a fabric containing random logic > >>>> (as I think the OP is suggesting) it becomes a lot harder to > >>>> reason about what the system is doing and what the software > >>>> running on such heterogeneous cores should look like. Only the > >>>> FPGA tools have a full view of what the system looks like, and > >>>> it seems stretching them to have them also generate software to > >>>> run on these cores. > >>> > >>> When people talk about things like "software running on such > >>> heterogeneous cores" it makes me think they don't really > >>> understand how this could be used. If you treat these small > >>> cores like logic elements, you don't have such lofty descriptions > >>> of "system software" since the software isn't created out of some > >>> global software package. Each core is designed to do a specific > >>> job just like any other piece of hardware and it has discrete > >>> inputs and outputs just like any other piece of hardware. If the > >>> hardware clock is not too fast, the software can synchronize with > >>> and literally function like hardware, but implementing more > >>> complex logic than the same area of FPGA fabric might. > >>> > >> > >> That is software. > >> > >> If you want to try to get cycle-precise control of the software and > >> use that precision for direct hardware interfacing, you are almost > >> certainly going to have a poor, inefficient and difficult design. > >> It doesn't matter if you say "think of it like logic" - it is /not/ > >> logic, it is software, and you don't use that for cycle-precise > >> control. You use when you need flexibility, calculations, and > >> decisions. > > > > I suppose you can make anything difficult if you try hard enough. > > > > Equally, you can make anything sound simple if you are vague enough and > wave your hands around.
Not trying to make it sound "simple". Just saying it can be useful and not the same as designing a chip with many CPUs for the purpose of providing lots of MIPS to crunch numbers. Those ideas and methods don't apply here.
> > The point is you don't have to make it difficult by talking about > > "software running on such heterogeneous cores". Just talk about it > > being a small hunk of software that is doing a specific job. Then > > the mystery is gone and the task can be made as easy as the task is. > > > > I did not use the phrase "software running on such heterogeneous cores" > - and I am not trying to make anything difficult. You are making cpu > cores. They run software. Saying they are "like logic elements" or > "they connect directly to hardware" does not make it so - and it does > not mean that what they run is not software.
You don't need to complicate the design by applying all the limitations of multi-processing when this is NOT at all the same. I call them logic elements because that is the intent, for them to implement logic. Yes, it is software, but that in itself creates no problems I am aware of. As to the connection, I really don't get your point. They either connect directly to the hardware because that's how they are designed, or they don't... because that's how they are designed. I don't know what you are saying about that.
> > In VHDL this would be a process(). VHDL programs are typically chock > > full of processes and no one wrings their hands worrying about how > > they will design the "software running on such heterogeneous cores". > > > > > > BTW, VHDL is software too. > > I agree that VHDL is software. And yes, there are usually processes in > VHDL designs. > > I am not /worrying/ about these devices running software - I am simply > saying that they /will/ be running software. I can't comprehend why you > want to deny that.
Enough! The CPUs run software. Now, what is YOUR point?
> It seems that you are frightened of software or > programmers, and want to call it anything /but/ software. > > If the software a core is running is simple enough to be described in > VHDL, then it should be a VHDL process - not software in a cpu core.
Ok, now you have crossed into a philosophical domain. If you want to think in these terms I won't dissuade you, but it has no meaning in digital design and I won't discuss it further.
> If > it is too complex for that, it is going to have to be programmed > separately in an appropriate language. That is not necessarily harder > or easier than VHDL design - it is just different.
Ok, so what?
> If you try to force the software to be synchronous with timing on the > hardware, /then/ you are going to be in big difficulties. So don't do > that - use hardware for the tightest timing, and software for the bits > that software is good for.
LOL! You are thinking in terms that are very obsolete. Read about how the F18A synchronizes with other processors and you will find that this is an excellent way to interface to the hardware as well. Just like logic, when the CPU handshakes with a logic clock it only has to meet the timing of a clock cycle, the same as all the logic in the design. In a VHDL process the steps are written out in sequence and not assumed to be running in parallel, just like software. When the process reaches a point of synchronization it will halt, just like logic.
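A hedged sketch of that idea in VHDL terms (names invented): a process written as a sequence of steps, each of which halts at a handshake before moving on. It is written simulation-style with multiple waits; most synthesis flows prefer a single clocked wait.

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    entity sum8 is
      port (
        clk, start   : in  std_logic;
        sample_valid : in  std_logic;
        sample       : in  std_logic_vector(7 downto 0);
        total        : out unsigned(10 downto 0);
        done         : out std_logic
      );
    end entity;

    architecture seq of sum8 is
      signal accum : unsigned(10 downto 0);
    begin
      process
      begin
        done <= '0';
        wait until rising_edge(clk) and start = '1';          -- halt until kicked off
        accum <= (others => '0');
        for i in 0 to 7 loop
          wait until rising_edge(clk) and sample_valid = '1'; -- halt at each handshake
          accum <= accum + unsigned(sample);
        end loop;
        wait until rising_edge(clk);
        total <= accum;
        done  <= '1';
      end process;
    end architecture;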
> >>> There is no need to think about how the CPUs would communicate > >>> unless there is a specific need for them to do so. The F18A uses > >>> a handshaked parallel port in their design. They seem to have > >>> done a pretty slick job of it and can actually hang the processor > >>> waiting for the acknowledgement saving power and getting an > >>> instantaneous wake up following the handshake. This can be used > >>> with other CPUs or > >>> > >> > >> Fair enough. > > > > Ok, that's a start. > > > > I'd expect that the sensible way to pass data between these, if you need > to do so much, is using FIFO's.
Between what exactly??? You are designing a system that is not before you. More importantly, you don't actually know anything about the ideas used in the F18A and GA144 designs.

I'm not trying to be rude, but you should learn more about them before you assume they need to work like every other processor you've ever used. The F18A and GA144 really only have two particularly unique ideas. One is that the processor is very, very small and, as a consequence, fast. The other is the communications technique.

Charles Moore is a unique thinker and he realized that with the advance of processing technology CPUs could be made very small and so become MIPS fodder. By that I mean you no longer need to focus on utilizing all the MIPS in a CPU. Instead, they can be treated as disposable, with only a tiny fraction of the available MIPS used to usefully implement some function.

While the GA144 is a commercial failure for many reasons, it does illustrate some very innovative ideas and is what prompted me to consider what happens when you can scatter CPUs around an FPGA as if they were logic blocks.

No, I don't have a fully developed "business plan". I am just interested in exploring the idea. Moore's chip (Green Arrays' actually; CM isn't actively working with them at this point, I believe) isn't very practical, because Moore isn't terribly interested in being practical exactly. But that isn't to say it doesn't embody some very interesting ideas.

Rick C.
On Wednesday, March 20, 2019 at 5:51:21 PM UTC+2, Tom Gardner wrote:
> On 20/03/19 14:51, already5chosen@yahoo.com wrote: > > On Wednesday, March 20, 2019 at 4:31:27 PM UTC+2, Tom Gardner wrote: > >> On 20/03/19 14:11, already5chosen@yahoo.com wrote: > >>> On Wednesday, March 20, 2019 at 3:37:17 PM UTC+2, Tom Gardner wrote: > >>>> > >>>> But more difficult that creating such a toolset is defining an application > >>>> level description that a toolset can munge. > >>>> > >>>> So, define (initially by example, later more formally) inputs to the > >>>> toolset and outputs from it. Then we can judge whether the concepts are > >>>> more than handwaving wishes. > >>>> > >>> > >>> I don't understand what you are asking for. > >> > >> Go back and read the parts of my post that you chose to snip. > >> > >> Give a handwaving indication of the concepts that avoid the > >> conceptual problems that I mentioned. > > > > Frankly, it starts to sound like you never used soft CPU cores in your designs. > > So, for somebody like myself, who uses them routinely for different tasks since 2006, you are really not easy to understand. > > Professionally, since 1978 I've done everything from low noise > analogue electronics, many hardware-software systems using > all sorts of technologies, networking at all levels of the > protocol stack, "up" to high availability distributed soft > real-time systems. > > And almost all of that has been on the bleeding edge. > > So, yes, I do have more than a passing acquaintance with > the characteristics of many hardware and software technologies, > and where partitions between them can, should and should not > be drawn. >
Is that a sort of admission that you have indeed never designed with soft cores?
> > > Concept? Concepts are good for new things, not for something that is a variation of something old and routine and obviously working. > > Whatever is being proposed, is it old or new? > > If old then the OP needs enlightenment and concrete > examples can easily be noted. > > If new, then provide the concepts. >
It is a new variation on an old concept. A cross between the PPCs in the ancient Virtex-II Pro and the soft cores used virtually everywhere in more modern times. Probably best characterized by what it is not like: it is not like Xilinx Zynq or Altera Cyclone V HPS.

The "new" part comes more from the new economics of sub-20nm processes than from the abstractions that you try to drag into it. NRE is more and more expensive, while gates are cheaper and cheaper. (Well, the cost of gates has started to stagnate in the last couple of years, but that does not matter. What matters is that at something like TSMC 12 nm, gates are already quite cheap.) So adding multiple small hard CPU cores, as a replacement for the multiple soft CPU cores that people are already used to using today, now starts to make sense. Maybe it's not a really good proposition, but at these silicon geometries it can't be written off as an obviously stupid one.

It appears that I don't agree with Rick about "how small is small", and respectively about how many of them should be placed on a die, but we probably agree about the percentage of FPGA area that intuitively seems worth allocating to such a feature - more than 1% but less than 5%. Also, he appears to like stack-based ISAs while I lean toward a more conventional 32-bit or 32/64-bit RISC, or maybe even toward a modern CISC akin to the Renesas RX, but those are relatively minor details.
> > >> Or better still, get the OP to do it. > >> > > > > With that part I agree.
On 20/03/2019 17:30, gnuarm.deletethisbit@gmail.com wrote:
> On Wednesday, March 20, 2019 at 11:30:15 AM UTC-4, David Brown > wrote: >> On 20/03/2019 15:50, gnuarm.deletethisbit@gmail.com wrote: >>> On Wednesday, March 20, 2019 at 6:14:21 AM UTC-4, David Brown >>> wrote: >>>> On 20/03/2019 03:30, gnuarm.deletethisbit@gmail.com wrote: >>>>> On Tuesday, March 19, 2019 at 10:29:07 AM UTC-4, Theo >>>>> Markettos wrote: >>>>>> Tom Gardner <spamjunk@blueyonder.co.uk> wrote: >>>>>>> Understand XMOS's xCORE processors and xC language, see >>>>>>> how they complement and support each other. I found the >>>>>>> net result stunningly easy to get working first time, >>>>>>> without having to continually read obscure errata! >>>>>> >>>>>> I can see the merits of the XMOS approach. But I'm >>>>>> unclear how this relates to the OP's proposal, which (I >>>>>> think) is having tiny CPUs as hard logic blocks on an FPGA, >>>>>> like DSP blocks. >>>>>> >>>>>> I completely understand the problem of running out of >>>>>> hardware threads, so a means of 'just add another one' is >>>>>> handy. But the issue is how to combine such things with >>>>>> other synthesised logic. >>>>>> >>>>>> The XMOS approach is fine when the hardware is uniform and >>>>>> the software sits on top, but when the hardware is >>>>>> synthesised and the 'CPUs' sit as pieces in a fabric >>>>>> containing random logic (as I think the OP is suggesting) >>>>>> it becomes a lot harder to reason about what the system is >>>>>> doing and what the software running on such heterogeneous >>>>>> cores should look like. Only the FPGA tools have a full >>>>>> view of what the system looks like, and it seems stretching >>>>>> them to have them also generate software to run on these >>>>>> cores. >>>>> >>>>> When people talk about things like "software running on such >>>>> heterogeneous cores" it makes me think they don't really >>>>> understand how this could be used. If you treat these small >>>>> cores like logic elements, you don't have such lofty >>>>> descriptions of "system software" since the software isn't >>>>> created out of some global software package. Each core is >>>>> designed to do a specific job just like any other piece of >>>>> hardware and it has discrete inputs and outputs just like any >>>>> other piece of hardware. If the hardware clock is not too >>>>> fast, the software can synchronize with and literally >>>>> function like hardware, but implementing more complex logic >>>>> than the same area of FPGA fabric might. >>>>> >>>> >>>> That is software. >>>> >>>> If you want to try to get cycle-precise control of the software >>>> and use that precision for direct hardware interfacing, you are >>>> almost certainly going to have a poor, inefficient and >>>> difficult design. It doesn't matter if you say "think of it >>>> like logic" - it is /not/ logic, it is software, and you don't >>>> use that for cycle-precise control. You use when you need >>>> flexibility, calculations, and decisions. >>> >>> I suppose you can make anything difficult if you try hard >>> enough. >>> >> >> Equally, you can make anything sound simple if you are vague enough >> and wave your hands around. > > Not trying to make it sound "simple". Just saying it can be useful > and not the same as designing a chip with many CPUs for the purpose > of providing lots of MIPS to crunch numbers. Those ideas and methods > don't apply here.
Fair enough. I have not suggested it was like using lots of CPUs for number crunching. (That is not what I would think the GA144 is good for either.)
> > >>> The point is you don't have to make it difficult by talking >>> about "software running on such heterogeneous cores". Just talk >>> about it being a small hunk of software that is doing a specific >>> job. Then the mystery is gone and the task can be made as easy >>> as the task is. >>> >> >> I did not use the phrase "software running on such heterogeneous >> cores" - and I am not trying to make anything difficult. You are >> making cpu cores. They run software. Saying they are "like logic >> elements" or "they connect directly to hardware" does not make it >> so - and it does not mean that what they run is not software. > > You don't need to complicate the design by applying all the > limitations of multi-processing when this is NOT at all the same. I > call them logic elements because that is the intent, for them to > implement logic. Yes, it is software, but that in itself creates no > problems I am aware of. >
I agree that software should not in itself create a problem. Trying to think of them as "logic" /would/ create problems. Think of them as software, and program them as software. I expect you'd think of them as entirely independent units with independent programs, rather than as a multi-cpu or heterogeneous system.
> As to the connection, I really don't get your point. They either > connect directly to the hardware because that's how they are > designed, or they don't... because that's how they are designed. I > don't know what you are saying about that. >
"Synchronise directly with hardware" might be a better phrase.
> >>> In VHDL this would be a process(). VHDL programs are typically >>> chock full of processes and no one wrings their hands worrying >>> about how they will design the "software running on such >>> heterogeneous cores". >>> >>> >>> BTW, VHDL is software too. >> >> I agree that VHDL is software. And yes, there are usually >> processes in VHDL designs. >> >> I am not /worrying/ about these devices running software - I am >> simply saying that they /will/ be running software. I can't >> comprehend why you want to deny that. > > Enough! The CPUs run software. Now, what is YOUR point? >
My point was that these are not logic, they are not logic elements (even if they could be physically small and cheap and scattered around a chip like logic elements). Thinking about them as "sequential logic elements" is not helpful. Think of them as small processors running simple and limited /software/. Unless you can find a way to automatically generate code for them, then they will be programmed using a /software/ programming language, not a logic or hardware programming language. If you are happy to accept that now, then great - we can move on.
> >> It seems that you are frightened of software or programmers, and >> want to call it anything /but/ software. >> >> If the software a core is running is simple enough to be described >> in VHDL, then it should be a VHDL process - not software in a cpu >> core. > > Ok, now you have crossed into a philosophical domain. If you want to > think in these terms I won't dissuade you, but it has no meaning in > digital design and I won't discuss it further. > > >> If it is too complex for that, it is going to have to be >> programmed separately in an appropriate language. That is not >> necessarily harder or easier than VHDL design - it is just >> different. > > Ok, so what? > > >> If you try to force the software to be synchronous with timing on >> the hardware, /then/ you are going to be in big difficulties. So >> don't do that - use hardware for the tightest timing, and software >> for the bits that software is good for. > > LOL! You are thinking in terms that are very obsolete. Read about > how the F18A synchronizes with other processors and you will find > that this is an excellent way to interface to the hardware as well. > Just like logic, when the CPU hand shakes with a logic clock, it only > has to meet the timing of a clock cycle, just like all the logic in > the same design.
That is not using software for synchronising with hardware (or other cpus) - it is using hardware.

When a processor's software has a loop waiting for an input signal to go low, then it reads a byte input, then it waits for the first signal to go high again - that is using software for synchronisation. That's okay for slow interfacing. When it waits for one signal, then uses three NOPs before setting another signal to get the timing right, that is using software for accurate timing - a very fragile solution.

When it is reading from a register that is latched by an external enable signal, it is using hardware for the interfacing and synchronisation. When the cpu has signals that can pause its execution at the right steps in handshaking, it is using hardware synchronisation. That is, of course, absolutely fine - that is using the right tools for the right jobs.
> In a VHDL process the steps are written out in > sequence and not assumed to be running in parallel, just like > software. When the process reaches a point of synchronization it > will halt, just like logic. >
You use VHDL processes for cycle-precise, simple sequences. You use software on a processor for less precise, complex sequences.
> >>>>> There is no need to think about how the CPUs would >>>>> communicate unless there is a specific need for them to do >>>>> so. The F18A uses a handshaked parallel port in their >>>>> design. They seem to have done a pretty slick job of it and >>>>> can actually hang the processor waiting for the >>>>> acknowledgement saving power and getting an instantaneous >>>>> wake up following the handshake. This can be used with other >>>>> CPUs or >>>>> >>>> >>>> Fair enough. >>> >>> Ok, that's a start. >>> >> >> I'd expect that the sensible way to pass data between these, if you >> need to do so much, is using FIFO's. > > Between what exactly??? You are designing a system that is not > before you. More importantly you don't actually know anything about > the ideas used in the F18A and GA144 designs.
Between whatever you want as you pass data around your chip.
> > I'm not trying to be rude, but you should learn more about them > before you assume they need to work like every other processor you've > ever used. The F18A and GA144 really only have two particularly > unique ideas. One is that the processor is very, very small and as a > consequence, fast. The other is the communications technique.
Communication between the nodes is with a synchronising port. A write to the port blocks until the receiving node does a read - similarly, a read blocks until the sending node does a write. Hardware synchronisation, not software, and not entirely unlike an absolutely minimal blocking FIFO. It is an interesting idea, though somewhat limiting.
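For reference, a hedged VHDL sketch of that kind of zero-depth rendezvous (signal names invented): there is no storage at all, and a transfer only happens in a cycle where the writer's valid and the reader's ready are both high, so each side simply stalls until the other shows up.

    library ieee;
    use ieee.std_logic_1164.all;

    entity rendezvous_port is
      port (
        wr_data  : in  std_logic_vector(17 downto 0);
        wr_valid : in  std_logic;   -- writer asserts and then stalls...
        wr_ready : out std_logic;   -- ...until the reader accepts
        rd_data  : out std_logic_vector(17 downto 0);
        rd_valid : out std_logic;   -- reader stalls until this is high
        rd_ready : in  std_logic
      );
    end entity;

    architecture rtl of rendezvous_port is
    begin
      rd_data  <= wr_data;
      rd_valid <= wr_valid;
      wr_ready <= rd_ready;
    end architecture;

The blocking behaviour lives entirely in how each side treats its valid/ready pair, which is essentially what the GA144 port does in hardware.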
> > Charles Moore is a unique thinker and he realized that with the > advance of processing technology CPUs could be made very small and so > become MIPS fodder. By that I mean you no longer need to focus on > utilizing all the MIPS in a CPU. Instead, they can be treated as > disposable and only a tiny fraction of the available MIPS used to > implement some function... usefully. > > While the GA144 is a commercial failure for many reasons, it does > illustrate some very innovative ideas and is what prompted me to > consider what happens when you can scatter CPUs around an FPGA as if > they were logic blocks.
As I said before, it is a very interesting and impressive concept, with a lot of cool ideas - despite being a commercial failure.

I think one of the biggest reasons for its failure is that it is a technologically interesting solution, but with no matching problems - there is no killer app for it. Combine that with a significant learning curve and development challenge compared to alternative established solutions.

I want to know if that is going to happen with your ideas here. Sure, you don't have a full business plan - but do you at least have thoughts about the kind of usage where these mini cpus would be a technologically superior choice compared to using state machines in VHDL (possibly generated with external programs), sequential logic generators (like C to HDL compilers, matlab tools, etc.), normal soft processors, or normal hard processors?

Give me a /reason/ for all this - rather than just saying you can make a simple stack-based cpu that's very small, so you could have lots of them on a chip.
> > No, I don't have a fully developed "business plan". I am just > interested in exploring the idea. Moore's (Green Array's actually, > CM isn't actively working with them at this point I believe) chip > isn't very practical because Moore isn't terribly interested in being > practical exactly. But that isn't to say it doesn't embody some very > interesting ideas. > > Rick C. >
On Wednesday, March 20, 2019 at 5:38:16 PM UTC-4, David Brown wrote:
> > I agree that software should not in itself create a problem. Trying to > think of them as "logic" /would/ create problems. Think of them as > software, and program them as software. I expect you'd think of them as > entirely independent units with independent programs, rather than as a > multi-cpu or heterogeneous system.
Ok, please tell me what those problems would be. I have no idea what you mean by what you say. You are likely reading a lot into this that I am not intending.
> > As to the connection, I really don't get your point. They either > > connect directly to the hardware because that's how they are > > designed, or they don't... because that's how they are designed. I > > don't know what you are saying about that. > > > > "Synchronise directly with hardware" might be a better phrase.
I don't know why, and likely I'm not going to care. I think you need to learn more about how the F18A works.
> > Enough! The CPUs run software. Now, what is YOUR point? > > > > My point was that these are not logic, they are not logic elements (even > if they could be physically small and cheap and scattered around a chip > like logic elements). Thinking about them as "sequential logic > elements" is not helpful. Think of them as small processors running > simple and limited /software/. Unless you can find a way to > automatically generate code for them, then they will be programmed using > a /software/ programming language, not a logic or hardware programming > language. If you are happy to accept that now, then great - we can move on.
You have it backwards. Please show me what you think the problems are. I don't care if they run software or have a Maxwell's demon tossing bits about, as long as it does what I need. You seem to get hung up on terminology so easily.
> > LOL! You are thinking in terms that are very obsolete. Read about > > how the F18A synchronizes with other processors and you will find > > that this is an excellent way to interface to the hardware as well. > > Just like logic, when the CPU hand shakes with a logic clock, it only > > has to meet the timing of a clock cycle, just like all the logic in > > the same design. > > That is not using software for synchronising with hardware (or other > cpus) - it is using hardware.
So??? You are the one who keeps talking about software/hardware whatever. I'm talking about the software being able to synchronize with the clock of the other hardware. When that happens there are tight timing constraints, in the same sense as software sampling an ADC on a periodic basis and having to process the resulting data before the next sample is ready. The only difference is that something like the F18A running at a few GHz can do a lot in a 10 ns clock cycle.
> When a processor's software has a loop waiting for an input signal to go > low, then it reads a byte input, then it waits for the first signal to > go high again - that is using software for synchronisation. That's okay > for slow interfacing. When it waits for one signal, then uses three > NOP's before setting another signal to get the timing right, that is > using software for accurate timing - a very fragile solution.
That is your construct because you know nothing of how the F18A works. As I've mentioned before, you would do well to read some of the app notes on this device. It really does have some good ideas to offer.
> When it is reading from a register that is latched by an external enable > signal, it is using hardware for the interfacing and synchronisation. > When the cpu has signals that can pause its execution at the right steps > in handshaking, it is using hardware synchronisation. That is, of > course, absolutely fine - that is using the right tools for the right jobs.
Duh!
> > In a VHDL process the steps are written out in > > sequence and not assumed to be running in parallel, just like > > software. When the process reaches a point of synchronization it > > will halt, just like logic. > > > > You use VHDL processes for cycle-precise, simple sequences. You use > software on a processor for less precise, complex sequences.
You are making arbitrary distinctions. The point is that if these CPUs are available they can be used to implement significant sections of logic in less space on the die than in the FPGA fabric.
> Between whatever you want as you pass data around your chip.
FIFOs are used for specific purposes. Not every interface needs them. Your suggestion that they should be used without an understanding of why is pretty pointless.
> > I'm not trying to be rude, but you should learn more about them > > before you assume they need to work like every other processor you've > > ever used. The F18A and GA144 really only have two particularly > > unique ideas. One is that the processor is very, very small and as a > > consequence, fast. The other is the communications technique. > > Communication between the nodes is with a synchronising port. A write > to the port blocks until the receiving node does a read - similarly, a > read blocks until the sending node does a write. Hardware > synchronisation, not software, and not entirely unlike an absolutely > minimal blocking FIFO. It is an interesting idea, though somewhat limiting.
Oh, what are the limitations? Also be aware that the blocking doesn't need to work as you describe it. Mostly the block would be on the read side: a processor would block until the data it needs is available... or until a clock signal transitions to indicate that the data that has been calculated can be output... just like the rest of the logic in the LUT/FF logic blocks of an FPGA.
> > Charles Moore is a unique thinker and he realized that with the > > advance of processing technology CPUs could be made very small and so > > become MIPS fodder. By that I mean you no longer need to focus on > > utilizing all the MIPS in a CPU. Instead, they can be treated as > > disposable and only a tiny fraction of the available MIPS used to > > implement some function... usefully. > > > > While the GA144 is a commercial failure for many reasons, it does > > illustrate some very innovative ideas and is what prompted me to > > consider what happens when you can scatter CPUs around an FPGA as if > > they were logic blocks. > > As I said before, it is a very interesting and impressive concept, with > a lot of cool ideas - despite being a commercial failure. > > I think one of the biggest reasons for its failure is that it is a > technologically interesting solution, but with no matching problems - > there is no killer app for it. When combined with a significant > learning curve and development challenge compared to alternative > established solutions.
Saying there is no killer app is rather the result than the problem. Yes, it was designed out of the idea of "what happens when I interconnect a bunch of these processors?" without considering a lot of real-world design needs. The chip has limited RAM, and more could have been included in some way even if not on each processor. There is no Flash, which again could have been included. The I/Os are all 1.8 volts. There was no real memory interface provided; rather, a DRAM interface was emulated in firmware, which actually doesn't work, so one had to be written for static RAM, which is hard to come by these days. I don't recall the full list. But this is not about the GA144.
> I want to know if that is going to happen with your ideas here. Sure, > you don't have a full business plan - but do you at least have thoughts > about the kind of usage where these mini cpus would be a technologically > superior choice compared to using state machines in VHDL (possibly > generated with external programs), sequential logic generators (like C > to HDL compilers, matlab tools, etc.), normal soft processors, or normal > hard processors?
The point wasn't that I don't have a business plan. The point was that I haven't given this as much thought as I would have if I were working on a business plan. I'm kicking around an idea. I'm not in a position to create FPGAs with or without small CPUs.
> Give me a /reason/ to all this - rather than just saying you can make a > simple stack-based cpu that's very small, so you could have lots of them > on a chip.
Why? Why don't you give ME a reason? Why don't you switch your point of view and figure out how this would be useful? Neither of us have anything to gain or lose.
> > No, I don't have a fully developed "business plan". I am just > > interested in exploring the idea. Moore's (Green Array's actually, > > CM isn't actively working with them at this point I believe) chip > > isn't very practical because Moore isn't terribly interested in being > > practical exactly. But that isn't to say it doesn't embody some very > > interesting ideas.
Rick C.