
Is it possible to implement Ethernet on a bare-metal FPGA, without using any hard or soft core processor?

Started by Swapnil Patil February 4, 2019
On Monday, February 4, 2019 at 11:30:33 PM UTC-5, A.P.Richelieu wrote:
> On 2019-02-04 at 07:29, Swapnil Patil wrote:
>> Hello folks,
>>
>> Let's say I have a Spartan 6 board only and I wanted to implement
>> Ethernet communication. So how can it be done?
>>
>> I don't want to connect any hard or soft core processor. I have also
>> looked into interfacing the WIZnet W5300 Ethernet controller to the
>> Spartan 6, but I don't want to connect any such controller, just the
>> Spartan 6. So how can it be done?
>>
>> It is not necessary to use the Spartan 6 board only. If it is possible
>> to work this out with any other board I would really like to know.
>> Thanks
>
> Netnod has an open source implementation of a 10 Gbit Ethernet MAC
> and connects that to an NTP server, all in the FPGA.
> It was not a generic UDP/IP stack, so they had some problems
> with not being able to handle ICMP messages when I last
> looked at the stuff two years ago.
>
> They split up incoming packets outside so that all UDP packets
> to port 123 went to the FPGA.
So it's not a stand-alone solution. Still, 10 Gbit/s is impressive.

I've designed comms stuff at lower rates, but still fast enough that things
couldn't be done in single width; they had to be done in parallel. That gets
complicated and big real fast as the speeds increase. But then "big" is a
relative term. Yesterday's "big" is today's "fits down in the corner of this
chip". Chips don't get faster so much these days, but they are still getting
bigger!

Rick C.

----
Tesla referral code - https://ts.la/richard11209
On 05/02/2019 00:18, Rick C. Hodgin wrote:
> On Monday, February 4, 2019 at 5:49:09 PM UTC-5, gnuarm.del...@gmail.com wrote:
>> On Monday, February 4, 2019 at 5:13:10 PM UTC-5, Rick C. Hodgin wrote:
>>> On Monday, February 4, 2019 at 4:24:05 PM UTC-5, gnuarm.del...@gmail.com wrote:
>>>> On Monday, February 4, 2019 at 4:16:23 PM UTC-5, Rick C. Hodgin wrote:
>>>>> In my opinion, it is only natural to do this.
>>>>
>>>> ...They said "no processors" and I take them at their word.
>>>> What you fail to understand ... is that they most likely don't
>>>> have a stored program processor of any type because that would
>>>> constitute software and they wish to be able to claim there is
>>>> "no software" even though HDL is really not much different
>>>> from software.
>>>
>>> They didn't say "no software," only this:
>>>
>>> "...but this is the protocol stack written in VHDL with
>>> no C and no processor and no ‘hardware compilation’ from
>>> software..."
>>>
>>> They only indicate it's an original VHDL implementation, with no
>>> C, no processor, and no hardware compilation from software, which
>>> I take to mean they don't have a design in some emulator that they
>>> then take and translate into some VHDL synthesized version of their
>>> emulator design, but rather it's all in VHDL.
>>>
>>> Now, using logic, nothing in their statement precludes them from
>>> having a non-C-based source code language that runs inside their
>>> proprietary tiny VHDL-only core, one written in VHDL from scratch,
>>> but one which emulates the version they wrote on their workbench
>>> for their emulator.
>>
>> Except for the part you quoted that says, "no processor"... But
>> then you want to define the language the way it suits you best.
>> Duh!
>
> I take the "no processor" to mean they aren't using an embedded
> processor.
You accept that they say "no processor", you understand they are not using "an embedded processor", yet you think they are using a "proprietary tiny VHDL-only core" to run software? What do you see as the difference between a "processor" that runs software and a "core" that runs software? (Hint - there is /no/ difference, and this design does not use a processor, or a core - whatever term you choose).
>> Besides there are other places where they indicate "no software".
>
> I haven't read those.
Fair enough. Trust the judgement of people who have.
>>> As I say, it's only natural to do this type of emulation first,
>>> and then do it in hardware after the proof of concept and the
>>> working out of the bugs.
>>
>> What emulation??? What are you talking about exactly?
>
> A software emulation of their hardware design that allows them to
> write their compilers, linkers, test programs, and design the whole
> hardware device in emulation prior to writing one line of VHDL code.
They don't have compilers, linkers, test programs - they don't have any software running on the device. (They will, obviously, run simulations on their VHDL during development.)
>> What makes you think they hadn't already done everything you
>> seem to be talking about and have it 100% in hardware/HDL when
>> this was written?
>
> It's possible they did that, but I would be surprised and amazed
> if it were so.
So you keep saying. So be surprised, and be amazed, because that is what they have done.
>> Oh, I know why, because that doesn't suit the first idea that
>> came into your head and you are totally incapable of backing
>> away from a wrong opinion, just like always.
>
> I've said multiple times in this thread I could be wrong. However,
> I do not believe I am. When it is proven I am, I will admit it.
The only proof anyone has is the information on their webpage. But it is clear enough to others. Your choices are to read it and believe that there is no processor or software of any kind in their design, or read it and believe they are lying. Reading some of it and misinterpreting that bit based on your preconceived notions and biases, despite others helping you with explanations, is not a logical option.
>>> You have to read what's there, as well as what isn't there. They
>>> never said "no software" but only no C, and no hardware compilation
>>> from software. It doesn't mean they don't have their own
>>> assembly language, or a custom compiler that doesn't use C, to
>>> write their own software layer, to run on their own hardware.
>>
>> There is other language to indicate they don't have software in the
>> FPGA, you just choose to ignore it. Most likely because of your
>> limitations to back away from a thought once you've made it even if
>> it is wrong.
>
> Point it out to me. Quote specific portions and I'll acknowledge
> it if I was wrong.
Just start with the bit already discussed - it is sufficient on its own. However, you can go further and read about their justifications and motivations for the design - the idea is that without software, the whole thing will be faster and more secure.
>>> Think about it. I could be wrong in my interpretation. But you
>>> could also be wrong in yours. And whereas you are quick to point
>>> out to me where I make my mistakes and how I am wrong ... are you
>>> willing to turn that scrutinizing assessment back upon yourself?
>>
>> You are saying they have a processor because that's the way you
>> think it should be done.
>
> I said I would be surprised and amazed if they didn't. I didn't
> say they didn't. I said, "I'd wager..." and other such language
> indicating my opinion. Those phrases were intermixed with me also
> saying many times, "I could be wrong."
You've said you'll admit being wrong when shown that you are wrong. You are wrong, you've been shown to be wrong - now accept that. (There is absolutely no problem with being wrong, especially about something you think is surprising and amazing - there is only a problem when you continue to deny it after the facts are on the table.)
>> The whole point of this product was that it didn't involve
>> software for whatever purposes they had.
>
> I view software in the form they're talking about as being some
> external source, a ROM or flash-like device that they can read
> the program which runs it from. Traditional software operates in
> this way.
Then your view of "software" is muddled. That may explain your misunderstandings about the design - so let's try to correct this particular mistake. In FPGAs, ASICs, microcontrollers, and any other large chip, it is not uncommon to have software /within/ the device. This can be given as an array of data in VHDL or Verilog, or come from other sources, and be turned into ROM or initialised RAM within the device. It can be for boot code, setup code, microcode, programmable state machines, or all sorts of other purposes. It is still software. A "processor" and "software" means you have one device - the processor - that reads sequences of instructions - the software - and executes those instructions. It does not matter whether the software is external, developed independently, written in any particular language.
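To make that distinction concrete, here is a tiny, purely illustrative C sketch (nothing to do with the design under discussion) of "software within a device": the instruction table is constant data that could just as well be a ROM initialised inside an FPGA or ASIC, and the little interpreter that walks it plays the role of the "processor". All the names and the toy instruction set are made up for the example.

```c
/* Hypothetical illustration only -- not related to the design discussed above.
 * A tiny "programmable state machine": the table below is the "software"
 * (a sequence of instructions baked into the device as constant data),
 * and run_program() is the "processor" that fetches and executes it. */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

enum opcode { OP_SET, OP_ADD, OP_PRINT, OP_HALT };

struct instruction {
    enum opcode op;
    int32_t     operand;
};

/* The "software": could just as well be a ROM initialised from an
 * array of data inside an FPGA or ASIC. */
static const struct instruction program[] = {
    { OP_SET,   40 },
    { OP_ADD,    2 },
    { OP_PRINT,  0 },
    { OP_HALT,   0 },
};

/* The "processor": reads a sequence of instructions and executes them. */
static void run_program(const struct instruction *prog)
{
    int32_t acc = 0;
    for (size_t pc = 0; ; pc++) {
        switch (prog[pc].op) {
        case OP_SET:   acc  = prog[pc].operand;   break;
        case OP_ADD:   acc += prog[pc].operand;   break;
        case OP_PRINT: printf("%d\n", (int)acc);  break;
        case OP_HALT:  return;
        }
    }
}

int main(void)
{
    run_program(program);   /* prints 42 */
    return 0;
}
```

The point of the sketch is only the division of roles: if there is no such instruction-fetching element anywhere in the design, there is no processor and no software, however the logic was described.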
> If their marketing department is trying to veer away from that
> traditional model, it would be to their benefit to say they do
> not have software, referring to them not having it in the
> traditional sense, but I'd wager they do have some kind of software
> in their design, albeit of the non-traditional form. I'd wager
> they could change their design apart from VHDL (unless the code
> they have is baked into VHDL data, but even then they're not
> really changing the VHDL but only the VHDL data), re-synthesize,
> and have a new core without changing any of the FSM designs on
> the inside, and now it works with a new version of their software,
> reflecting their changes.
>
> I could be wrong.
You are not wrong to say that saying they have no software is a benefit to their marketing department - and if you want to suspect them of lying for marketing purposes, that's up to you. But you are wrong to say your views here are consistent with the design they have described.
>> Designing a processor in the FPGA and then writing code for
>> it to implement a TCP/IP stack is a pointless way to do it
>> and provides no market advantage in this case.
>
> A traditional CPU, yes. But a specialized CPU ... not at all.
> It would be a specialized design for this purpose, with several
> instructions which operate the FSMs which do their job in a
> sequenced execution of FSM manipulation. I see this as a very
> desirable solution on many levels. But, I could be wrong.
It would be a pointless task, because designing a specialised CPU is a very expensive task (in time, resources, money, risk, etc.) and would provide very little gain for that investment for a task like a TCP/IP stack. Specialising an existing soft CPU by adding instructions geared towards faster TCP/IP processing - /that/ could make sense.
>> If you were talking about a solution that had no other constraints,
>> I would say a combination of software and hardware might be
>> useful, but even then what parts of the TCP/IP stack can be
>> done in software so that it doesn't slow down the result?
>
> You don't design the CPU that way. You design the CPU to have
> an instruction that handles the necessary CISC-like operations
> via a single instruction. It directs the hardware you've designed
> specifically to execute a particular task, and it does
> so by software. It stores things internally in a way that does
> allow for later post-unit manipulation across a common / shared
> bus, and then allows them to be sent "off-CPU" on the main bus
> to other units for additional processing.
>
> It is how I would do it. :-)
Other people would not design a CPU for that task. They would use existing CPUs.
>> If you don't wish to believe any of this, I guess that's fine.
>> You have shown many times before that you only believe the
>> first thought that comes to your mind and are entirely incapable
>> of believing evidence based on its merits once you have formed
>> an opinion. That likely explains a lot of the things you believe
>> in.
>
> You have no evidence to back up that claim, and I have mountains
> of evidence which prove the contrary.
Your "evidence" is that you, personally, would be "amazed and surprised" if there is no software. That is not something anyone else considers evidence of any kind, much less "mountains". On the "no software" side, there is all the information on their website.
>> I've said as much to you as I can. Feel free to continue without me.
>
> "And they were forced to eat Robin's minstrels."
> "And there was much rejoicing."
On 04/02/2019 21:55, gnuarm.deletethisbit@gmail.com wrote:

> I don't know a lot about TCP/IP, but I've been told you can implement it to many different degrees depending on your requirements. I think it had to do with the fact that some aspects are specified rather vaguely, timeouts and who manages the retries, etc. I assume this was not as full an implementation as you might have on a PC. So I wonder if this is an apples to oranges comparison. >
That is correct - there are lots of things in IP networking in general, and TCP/IP on top of that, which can be simplified, limited, or handled statically. For example, TCP/IP has window size control so that each end can automatically adjust if there is a part of the network that has a small MTU (packet size) - that way there will be less fragmentation, and greater throughput. That is an issue if you have dial-up modems and similar links - if you have a more modern network, you could simply assume a larger window size and leave it fixed. There are a good many such parts of the stack that can be simplified.
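As a purely illustrative sketch of the "assume it and leave it fixed" idea, the fragment below fills in a TCP header with a hard-coded advertised window instead of adjusting it dynamically. The field layout follows RFC 793; the function names and the chosen window value are invented for the example and are not taken from any product mentioned in this thread.

```c
/* Illustrative sketch only: building a TCP header with a fixed,
 * compile-time receive window instead of a dynamically adjusted one.
 * Field layout per RFC 793; all names and values here are hypothetical.
 * (Real on-wire code would also guarantee a packed, padding-free layout.) */
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>   /* htons() / htonl() */

#define FIXED_RX_WINDOW  8192u   /* assumed constant receive window, bytes */

struct tcp_header {
    uint16_t src_port;
    uint16_t dst_port;
    uint32_t seq;
    uint32_t ack;
    uint8_t  data_offset;   /* upper 4 bits: header length in 32-bit words */
    uint8_t  flags;         /* FIN/SYN/RST/PSH/ACK/URG bits */
    uint16_t window;        /* receive window -- fixed in this sketch */
    uint16_t checksum;      /* computed later over pseudo-header + segment */
    uint16_t urgent_ptr;
};

static void build_tcp_header(struct tcp_header *h,
                             uint16_t sport, uint16_t dport,
                             uint32_t seq, uint32_t ack, uint8_t flags)
{
    memset(h, 0, sizeof(*h));
    h->src_port    = htons(sport);
    h->dst_port    = htons(dport);
    h->seq         = htonl(seq);
    h->ack         = htonl(ack);
    h->data_offset = (uint8_t)(5u << 4);       /* 5 words = 20 bytes, no options */
    h->flags       = flags;
    h->window      = htons(FIXED_RX_WINDOW);   /* never renegotiated here */
    /* checksum left as 0 here; filled in by the caller */
}
```

Every field that a full stack would manage adaptively and that you instead pin to a constant is one less state machine the hardware (or minimal software) has to carry.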
> Are there any companies selling TCP/IP that they actually list on their web site? >
On 04.02.2019 at 10:20, Swapnil Patil wrote:
> On Monday, February 4, 2019 at 11:59:45 AM UTC+5:30, Swapnil Patil wrote:
>> Hello folks,
>>
>> Let's say I have a Spartan 6 board only and I wanted to implement
>> Ethernet communication. So how can it be done?
>>
>> I don't want to connect any hard or soft core processor. I have also
>> looked into interfacing the WIZnet W5300 Ethernet controller to the
>> Spartan 6, but I don't want to connect any such controller, just the
>> Spartan 6. So how can it be done?
>>
>> It is not necessary to use the Spartan 6 board only. If it is possible
>> to work this out with any other board I would really like to know.
>> Thanks
>
> Thanks for the replies. I understand it's not easy to implement; still, I
> want to give it a try. If you have any links or documents of work done
> related to this, please share. Rick C., could you tell me more about how
> one should start to implement this with cores? I also wanted to know more
> about these written cores. Hans, is it possible to get information about
> the work done by the companies you know about? Thanks.
You might want to read this: https://www.fpga4fun.com/10BASE-T.html

Thomas
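For anyone curious what that page boils down to: 10BASE-T sends Manchester-encoded bits, so each data bit becomes two half-bit line samples (a 20 MHz toggle rate for 10 Mbit/s of data). Below is a rough C model of just the encoding step. It assumes the IEEE 802.3 polarity convention (logical 1 = low-to-high transition at mid-bit) and LSB-first bit order, and is meant only to illustrate the idea, not the FPGA implementation described on that page.

```c
/* Rough software model of Manchester encoding as used by 10BASE-T.
 * Assumed convention (IEEE 802.3): '1' = low-to-high transition at
 * mid-bit, '0' = high-to-low. Each data bit becomes two half-bit
 * line samples, so 10 Mbit/s of data needs a 20 MHz line clock.
 * Illustration only -- an FPGA version would do this in logic. */
#include <stdint.h>
#include <stddef.h>

/* Encode 'nbits' bits (LSB first within each byte, the order Ethernet
 * transmits them) into half-bit samples: out[] receives 2 samples per
 * bit, each 0 or 1 giving the line level for half a bit time. */
static size_t manchester_encode(const uint8_t *data, size_t nbits,
                                uint8_t *out)
{
    size_t n = 0;
    for (size_t i = 0; i < nbits; i++) {
        unsigned bit = (data[i / 8] >> (i % 8)) & 1u;
        if (bit) {              /* '1': low then high */
            out[n++] = 0;
            out[n++] = 1;
        } else {                /* '0': high then low */
            out[n++] = 1;
            out[n++] = 0;
        }
    }
    return n;                   /* number of half-bit samples produced */
}
```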
On Tuesday, February 5, 2019 at 12:12:47 PM UTC+2, David Brown wrote:
> On 04/02/2019 21:55, gnuarm.deletethisbit@gmail.com wrote:
>
>> I don't know a lot about TCP/IP, but I've been told you can implement it to many different degrees depending on your requirements. I think it had to do with the fact that some aspects are specified rather vaguely, timeouts and who manages the retries, etc. I assume this was not as full an implementation as you might have on a PC. So I wonder if this is an apples to oranges comparison.
>
> That is correct - there are lots of things in IP networking in general,
> and TCP/IP on top of that, which can be simplified, limited, or handled
> statically. For example, TCP/IP has window size control so that each
> end can automatically adjust if there is a part of the network that has
> a small MTU (packet size) - that way there will be less fragmentation,
> and greater throughput. That is an issue if you have dial-up modems and
> similar links - if you have a more modern network, you could simply
> assume a larger window size and leave it fixed. There are a good many
> such parts of the stack that can be simplified.
>
>> Are there any companies selling TCP/IP that they actually list on their web site?
TCP window size and MTU are orthogonal concepts.
Judging by this post, I'd suspect that you know more about TCP than Rick C., but less than Rick H., who sounds like the only one of the three of you who has got his hands dirty in an attempt to implement it.
On Tuesday, February 5, 2019 at 12:33:18 AM UTC+2, Tom Gardner wrote:
> Back in the late 80s there was the perception that TCP was
> slow, and hence new transport protocols were developed to
> mitigate that, e.g. XTP.
>
> In reality, it wasn't TCP per se that was slow. Rather
> the implementation, particularly multiple copies of data
> as the packet went up the stack, and between network
> processor / main processor and between kernel and user
> space.
TCP per se *is* slow when the frame error rate of the underlying layers is not near zero.

Also, there exist cases of "interesting" interactions between the Nagle algorithm at the transmitter and the ACK-saving (delayed ACK) algorithm at the receiver that can lead to slowness in certain styles of TCP conversation (send a mid-size block of data, wait for an application-level acknowledge, send the next mid-size block); this is typically resolved by not following the language of the RFCs too literally.
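For reference, the usual application-level workaround for that Nagle / delayed-ACK interaction is simply to disable Nagle on the sending socket. A minimal sketch using the standard BSD sockets API (connection setup and error handling are assumed to exist elsewhere):

```c
/* Minimal sketch: disabling the Nagle algorithm on an already-connected
 * TCP socket, the usual workaround for the Nagle / delayed-ACK
 * interaction described above. 'sock' is assumed to be a valid,
 * connected socket descriptor. */
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>   /* TCP_NODELAY */

static int disable_nagle(int sock)
{
    int one = 1;
    /* Send small segments immediately instead of coalescing them. */
    return setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &one, sizeof one);
}
```

Typically this is called once right after connect(); the trade-off is more small packets on the wire.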
On 07/02/2019 11:07, already5chosen@yahoo.com wrote:
> On Tuesday, February 5, 2019 at 12:12:47 PM UTC+2, David Brown wrote:
>> On 04/02/2019 21:55, gnuarm.deletethisbit@gmail.com wrote:
>>
>>> I don't know a lot about TCP/IP, but I've been told you can implement it to many different degrees depending on your requirements. I think it had to do with the fact that some aspects are specified rather vaguely, timeouts and who manages the retries, etc. I assume this was not as full an implementation as you might have on a PC. So I wonder if this is an apples to oranges comparison.
>>
>> That is correct - there are lots of things in IP networking in general,
>> and TCP/IP on top of that, which can be simplified, limited, or handled
>> statically. For example, TCP/IP has window size control so that each
>> end can automatically adjust if there is a part of the network that has
>> a small MTU (packet size) - that way there will be less fragmentation,
>> and greater throughput. That is an issue if you have dial-up modems and
>> similar links - if you have a more modern network, you could simply
>> assume a larger window size and leave it fixed. There are a good many
>> such parts of the stack that can be simplified.
>>
>>> Are there any companies selling TCP/IP that they actually list on their web site?
>
> TCP window size and MTU are orthogonal concepts.
> Judged by this post, I'd suspect that you know more about TCP that Rick C, but less than Rick H which sounds like the only one of 3 of you that had his own hands dirty in attempt to implement it.
They are different concepts, yes. The window size can be reduced to below the MTU size on small systems to ensure that you don't get fragmentation, and that you don't need to resend more than one low-level packet. But it is not a level of detail that I have needed to work at, so I have no personal experience of that.
On 07/02/19 10:23, already5chosen@yahoo.com wrote:
> On Tuesday, February 5, 2019 at 12:33:18 AM UTC+2, Tom Gardner wrote:
>>
>> Back in the late 80s there was the perception that TCP was slow, and hence
>> new transport protocols were developed to mitigate that, e.g. XTP.
>>
>> In reality, it wasn't TCP per se that was slow. Rather the implementation,
>> particularly multiple copies of data as the packet went up the stack, and
>> between network processor / main processor and between kernel and user
>> space.
>
> TCP per se *is* slow when frame error rate of underlying layers is not near
> zero.
That's a problem with any transport protocol. The solution to underlying frame errors is FEC, but that reduces the bandwidth when there are no errors. Choose what you optimise for!
> Also, there exist cases of "interesting" interactions between Nagle algorithm
> at transmitter and ACK saving algorithm at receiver that can lead to slowness
> of certain styles of TCP conversions (Send mid-size block of data, wait for
> application-level acknowledge, send next mid-size block) that is typically
> resolved by not following the language of RFCs too literally.
That sounds like a "corner case". I'd be surprised if you couldn't find corner cases in all transport protocols.
On Thursday, February 7, 2019 at 10:04:09 PM UTC+2, Tom Gardner wrote:
> On 07/02/19 10:23, already5chosen@yahoo.com wrote:
>> On Tuesday, February 5, 2019 at 12:33:18 AM UTC+2, Tom Gardner wrote:
>>>
>>> Back in the late 80s there was the perception that TCP was slow, and hence
>>> new transport protocols were developed to mitigate that, e.g. XTP.
>>>
>>> In reality, it wasn't TCP per se that was slow. Rather the implementation,
>>> particularly multiple copies of data as the packet went up the stack, and
>>> between network processor / main processor and between kernel and user
>>> space.
>>
>> TCP per se *is* slow when frame error rate of underlying layers is not near
>> zero.
>
> That's a problem with any transport protocol.
TCP is worse than most. Partly that is because it is a jack of all trades in terms of latency and bandwidth. Partly it is because it is stream-oriented (rather than datagram-oriented), which makes recovery based on selective retransmission far more complicated, and therefore less practical.
> The solution to underlying frame errors is FEC, but that
> reduces the bandwidth when there are no errors. Choose
> what you optimise for!
>
>> Also, there exist cases of "interesting" interactions between Nagle algorithm
>> at transmitter and ACK saving algorithm at receiver that can lead to slowness
>> of certain styles of TCP conversions (Send mid-size block of data, wait for
>> application-level acknowledge, send next mid-size block) that is typically
>> resolved by not following the language of RFCs too literally.
>
> That sounds like a "corner case". I'd be surprised
> if you couldn't find corner cases in all transport
> protocols.
Sure. But not a rare corner case. And again, far less likely to happen to datagram-oriented reliable transports.
On Thu, 07 Feb 2019 20:04:04 +0000, Tom Gardner wrote:

> On 07/02/19 10:23, already5chosen@yahoo.com wrote:
>> On Tuesday, February 5, 2019 at 12:33:18 AM UTC+2, Tom Gardner wrote:
>>>
>>> Back in the late 80s there was the perception that TCP was slow, and
>>> hence new transport protocols were developed to mitigate that, e.g.
>>> XTP.
>>>
>>> In reality, it wasn't TCP per se that was slow. Rather the
>>> implementation, particularly multiple copies of data as the packet
>>> went up the stack, and between network processor / main processor and
>>> between kernel and user space.
>>
>> TCP per se *is* slow when frame error rate of underlying layers is not
>> near zero.
>
> That's a problem with any transport protocol.
>
> The solution to underlying frame errors is FEC, but that reduces the
> bandwidth when there are no errors. Choose what you optimise for!
FEC does reduce bandwidth in some sense, but in all of the Ethernet FEC implementations I've done, the 64B/66B signal is recoded into something more efficient to make room for the FEC overhead. In other words, the raw bit rate on the fibre is the same whether FEC is on or off.

Perhaps a more important issue is latency. In my experience these are block codes, and the entire block must be received before it can be corrected. The last one I did added about 240 ns when FEC was enabled. Optics modules (e.g. QSFP) that have sufficient margin to work without FEC are sometimes marketed as "low latency" even though they have the same latency as the ones that require FEC.

Regards,
Allan
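To put a rough number on the block-accumulation effect described above, here is a back-of-envelope calculation. The parameters (an RS(544,514) code over 10-bit symbols, received on a single 25.78125 Gb/s serial lane) are assumptions chosen only for illustration, not taken from the post, and decode time is not included.

```c
/* Back-of-envelope estimate of FEC block-accumulation latency.
 * Assumed (not from the post above): an RS(544,514) code over 10-bit
 * symbols, i.e. 5440 bits per codeword, arriving on a single
 * 25.78125 Gb/s lane. Decoding time is extra and not modelled. */
#include <stdio.h>

int main(void)
{
    const double codeword_bits = 544 * 10;     /* 5440 bits per codeword */
    const double lane_rate_bps = 25.78125e9;   /* assumed serial line rate */

    double accumulate_ns = codeword_bits / lane_rate_bps * 1e9;
    printf("codeword fill time: %.0f ns\n", accumulate_ns);  /* ~211 ns */
    return 0;
}
```

The fill time alone lands in the same ballpark as the ~240 ns figure quoted above, which is consistent with the point that the latency comes from having to receive the whole block before correction can start.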