> On Tue, 2009-06-02 at 09:53 -0700, Antti wrote:
>> Hi
>>
>> does anybody have real and realistic performance figures for Xilinx
>> GbE solution with XPS_TEMAC/MPMC ?
>>
>> we need to get 60% of GbE wirespeed, UDP transmit only but it seems
>> like real hard target to reach :(
>>
>> MPMC has memory latency of 23 cycles (added to EACH memory access
>> cycle) so the ethernet
>> SDMA takes a lot of bandwidth already, there is another DMA writing
>> data at same speed, and the
>> PPC itself uses the same memory too
>>
>> Antti
>
> With custom Ethernet core + MPMC we get data rates slightly above
> 100MBps, depending on MTU. The single memory is shared by MicroBlaze/PPC
> for code and data access, at least one streaming data source (custom PIM
> for NPI) and the custom Ethernet IP (MAC + some packet composers,
> decoders, etc.) again connected to NPI.
> We decided not to use XPS_TEMAC because of its low performance. The problem is
> I lost my benchmark results. Sorry.
>
> Jan
Hello,
FYI, a while ago we developed a solution we call GEDEK that
delivers 100% GbE performance: we guarantee the simultaneous
generation and reception of back-to-back Gigabit Ethernet frames
without delay or loss, and our hardware stack handles UDP, some ICMP &
ARP, without requiring a processor (a hardware stack indeed). Available &
tested on Xilinx & Altera, at 100M, 1GbE, or dual-speed. We provide both
ends (FPGA block and PC Win/Linux API in source code). We have options
for Remote Flash Programming, Virtual UARTs, WOL, etc.
Documentation and demos for both vendors are available on demand at
info at alse-fr not calm.
Bert
Reply by OutputLogic ● June 3, 2009
On Jun 3, 5:33 am, "john.orla...@gmail.com" <john.orla...@gmail.com>
wrote:
> <snip>
>
>
>
>
>
> > > > > hum.. my current task is to optimize a XPS_TEMAC based system
> > > > > (with 1 single DDR2 chip as main memory!)
> > > > > to reach about 580MBps
>
> > > > > :(
>
> > > > > I have never said that to be possible, but i need money :(
> > > > > and if the goal cant be reached there will be none...
>
> > > > > over 100MBps is sure possible (with XPS_TEMAC too)
> > > > > but 580MBps is beyond doable i think for sure
>
> > > > > Antti
>
> > > > Just a simple calculation:
> > > > 125000000 / 1024 / 1024 = 119.2MBps
> > > > It is without protocol overhead, FCS, IFGs. How do you want to exceed
> > > > the limit of Gigabit Ethernet?
>
> > > > Jan
>
> > > Or did I get you wrong and you talk about Mbits per second? I was
> > > talking about Mbytes per sec.
> > > If it is so, your goal should be reachable using xps_ll_temac instead of
> > > xps_temac.
> > > Jan
>
> > oh, i am talking wrong today
> > yes Mbit/sec or Mbps
> > and sure XPS_LL_TEMAC with ALL hardware options tuned to maximum
> > and we do not copy buffers, and do not calc UDP checksum with PPC
>
> > but even Treck's marketing booklet promised only 355 Mbps for MTU1500
> > and I need 580Mbps
>
> > Antti
>
> Antti,
> The USRP2 (http://en.wikipedia.org/wiki/
> Universal_Software_Radio_Peripheral) is a software-defined radio that
> uses a Spartan III + gigE PHY chip to reach 800 Mbits/sec sustained.
> I believe the MAC in their FPGA has a few limitations (only supports
> 1000 Base-T) but was originally based on the opencores tri-mode MAC
> (though significant modifications were needed to make it reliable,
> IIRC). The other caveat here is that the USRP2 guys push raw Ethernet
> frames into a PC...i.e., they don't use TCP or UDP. I believe their
> analysis showed that they needed a custom network layer to support the
> sustained high data rates.
>
> So I wouldn't give up hope on making something work here at 580 Mbits/
> sec. All of the USRP2 code (software + HDL) is open-sourced, and
> should be available through their subversion repositories.
>
> Good Luck,
> John
I'm using UDP and getting a sustained 600-700 Mbits/sec. In fact, this
number is limited by the PC side: either the network card or the stack.
- outputlogic
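Whether the PC side keeps up is easy to probe with a throwaway UDP sink/source pair; a loopback sketch of the measurement shape only (a real test would run against the actual NIC with much larger bursts):

```python
import socket
import time

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))                 # let the OS pick a free port
rx.settimeout(1.0)
addr = rx.getsockname()

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
payload = b"\x00" * 1024
count = 50

start = time.perf_counter()
for _ in range(count):
    tx.sendto(payload, addr)
received = 0
try:
    for _ in range(count):
        received += len(rx.recv(2048))
except socket.timeout:
    pass                                   # UDP may drop; count what arrived
elapsed = time.perf_counter() - start

print(f"{received} bytes in {elapsed * 1e3:.2f} ms "
      f"({received * 8 / elapsed / 1e6:.0f} Mbit/s burst)")
tx.close()
rx.close()
```

A loopback burst says nothing about the NIC or driver, of course; it only shows where the stack-side counters would go in a real benchmark.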
Reply by john...@gmail.com ● June 3, 2009
<snip>
> > > > hum.. my current task is to optimize a XPS_TEMAC based system
> > > > (with 1 single DDR2 chip as main memory!)
> > > > to reach about 580MBps
>
> > > > :(
>
> > > > I have never said that to be possible, but i need money :(
> > > > and if the goal cant be reached there will be none...
>
> > > > over 100MBps is sure possible (with XPS_TEMAC too)
> > > > but 580MBps is beyond doable i think for sure
>
> > > > Antti
>
> > > Just a simple calculation:
> > > 125000000 / 1024 / 1024 = 119.2MBps
> > > It is without protocol overhead, FCS, IFGs. How do you want to exceed
> > > the limit of Gigabit Ethernet?
>
> > > Jan
>
> > Or did I get you wrong and you talk about Mbits per second? I was
> > talking about Mbytes per sec.
> > If it is so, your goal should be reachable using xps_ll_temac instead of
> > xps_temac.
> > Jan
>
> oh, i am talking wrong today
> yes Mbit/sec or Mbps
> and sure XPS_LL_TEMAC with ALL hardware options tuned to maximum
> and we do not copy buffers, and do not calc UDP checksum with PPC
>
> but even Treck's marketing booklet promised only 355 Mbps for MTU1500
> and I need 580Mbps
>
> Antti
Antti,
The USRP2 (http://en.wikipedia.org/wiki/
Universal_Software_Radio_Peripheral) is a software-defined radio that
uses a Spartan III + gigE PHY chip to reach 800 Mbits/sec sustained.
I believe the MAC in their FPGA has a few limitations (only supports
1000 Base-T) but was originally based on the opencores tri-mode MAC
(though significant modifications were needed to make it reliable,
IIRC). The other caveat here is that the USRP2 guys push raw Ethernet
frames into a PC...i.e., they don't use TCP or UDP. I believe their
analysis showed that they needed a custom network layer to support the
sustained high data rates.
So I wouldn't give up hope on making something work here at 580 Mbits/
sec. All of the USRP2 code (software + HDL) is open-sourced, and
should be available through their subversion repositories.
Good Luck,
John
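The raw-frame approach John describes boils down to composing Ethernet II frames directly, with no IP or UDP layer on top; a minimal sketch (the MAC addresses and EtherType below are placeholder values, not anything from the USRP2 code):

```python
import struct

def build_frame(dst: bytes, src: bytes, ethertype: int, payload: bytes) -> bytes:
    """Ethernet II frame: 6-byte dst MAC, 6-byte src MAC, 2-byte EtherType."""
    if len(payload) < 46:                       # pad to minimum payload size
        payload = payload.ljust(46, b"\x00")
    return struct.pack("!6s6sH", dst, src, ethertype) + payload

frame = build_frame(
    dst=bytes.fromhex("ffffffffffff"),          # broadcast
    src=bytes.fromhex("020000000001"),          # locally administered, made up
    ethertype=0x88B5,                           # IEEE local-experimental type
    payload=b"sample data",
)
print(len(frame))                               # 14 + 46 = 60 bytes
```

On Linux such a frame can then be pushed out through an AF_PACKET raw socket (root required); the FCS is appended by the MAC hardware.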
Reply by MM ● June 2, 2009
<Antti.Lukats@googlemail.com> wrote in message news:244fac96-a937-48b4-949e-
>
> but even Treck's marketing booklet promised only 355 Mbps for MTU1500
> and I need 580Mbps
Antti,
I think the Treck numbers assume TCP/IP. I am actually in the middle of
evaluating the same thing. I have a design similar to the one described in
XAPP1041 running on a custom V4FX60 board, and I seem to be getting the
numbers you are looking for (raw Ethernet Tx traffic), although it is too
early for me to say whether they are "real", i.e. I haven't yet properly
analyzed what the Xilinx perf_app software does.
/Mikhail
Reply by Antt...@googlemail.com ● June 2, 2009
On 2 June, 21:50, Jan Pech <inva...@void.domain> wrote:
> On Tue, 2009-06-02 at 20:47 +0200, Jan Pech wrote:
> > On Tue, 2009-06-02 at 11:32 -0700, Antti.Luk...@googlemail.com wrote:
> > > On 2 June, 21:05, Jan Pech <inva...@void.domain> wrote:
> > > > On Tue, 2009-06-02 at 09:53 -0700, Antti wrote:
> > > > > Hi
>
> > > > > does anybody have real and realistic performance figures for Xilinx
> > > > > GbE solution with XPS_TEMAC/MPMC ?
>
> > > > > we need to get 60% of GbE wirespeed, UDP transmit only but it seems
> > > > > like real hard target to reach :(
>
> > > > > MPMC has memory latency of 23 cycles (added to EACH memory access
> > > > > cycle) so the ethernet
> > > > > SDMA takes a lot of bandwidth already, there is another DMA writing
> > > > > data at same speed, and the
> > > > > PPC itself uses the same memory too
>
> > > > > Antti
>
> > > > With custom Ethernet core + MPMC we get data rates slightly above
> > > > 100MBps, depending on MTU. The single memory is shared by MicroBlaze/PPC
> > > > for code and data access, at least one streaming data source (custom PIM
> > > > for NPI) and the custom Ethernet IP (MAC + some packet composers,
> > > > decoders, etc.) again connected to NPI.
> > > > We decided not to use XPS_TEMAC because of its low performance. The problem is
> > > > I lost my benchmark results. Sorry.
>
> > > > Jan
>
> > > hum.. my current task is to optimize a XPS_TEMAC based system
> > > (with 1 single DDR2 chip as main memory!)
> > > to reach about 580MBps
>
> > > :(
>
> > > I have never said that to be possible, but i need money :(
> > > and if the goal cant be reached there will be none...
>
> > > over 100MBps is sure possible (with XPS_TEMAC too)
> > > but 580MBps is beyond doable i think for sure
>
> > > Antti
>
> > Just a simple calculation:
> > 125000000 / 1024 / 1024 = 119.2MBps
> > It is without protocol overhead, FCS, IFGs. How do you want to exceed
> > the limit of Gigabit Ethernet?
>
> > Jan
>
> Or did I get you wrong and you talk about Mbits per second? I was
> talking about Mbytes per sec.
> If it is so, your goal should be reachable using xps_ll_temac instead of
> xps_temac.
> Jan
oh, I am talking wrong today
yes, Mbit/sec or Mbps
and sure, XPS_LL_TEMAC with ALL hardware options tuned to maximum,
and we do not copy buffers, and do not calculate the UDP checksum with the PPC
but even Treck's marketing booklet promised only 355 Mbps for MTU 1500,
and I need 580 Mbps
Antti
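The checksum Antti offloads is the standard Internet one's-complement sum over 16-bit words (RFC 768/1071); a minimal software sketch over dummy packet bytes, just to illustrate what the hardware checksum option computes:

```python
def internet_checksum(data: bytes) -> int:
    """One's-complement sum of 16-bit big-endian words (RFC 1071)."""
    if len(data) % 2:
        data += b"\x00"                            # pad odd length
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

packet = bytes(range(20))                          # dummy packet bytes
csum = internet_checksum(packet)
# Appending the checksum and re-summing must verify to zero.
check = internet_checksum(packet + csum.to_bytes(2, "big"))
print(hex(csum), hex(check))                       # prints 0xa59b 0x0
```

Doing this per-byte on a PPC for every outgoing frame is exactly the cost the XPS_LL_TEMAC checksum-offload option removes.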
Reply by Antt...@googlemail.com ● June 2, 2009
On 2 June, 21:47, Jan Pech <inva...@void.domain> wrote:
> On Tue, 2009-06-02 at 11:32 -0700, Antti.Luk...@googlemail.com wrote:
> > On 2 June, 21:05, Jan Pech <inva...@void.domain> wrote:
> > > On Tue, 2009-06-02 at 09:53 -0700, Antti wrote:
> > > > Hi
>
> > > > does anybody have real and realistic performance figures for Xilinx
> > > > GbE solution with XPS_TEMAC/MPMC ?
>
> > > > we need to get 60% of GbE wirespeed, UDP transmit only but it seems
> > > > like real hard target to reach :(
>
> > > > MPMC has memory latency of 23 cycles (added to EACH memory access
> > > > cycle) so the ethernet
> > > > SDMA takes a lot of bandwidth already, there is another DMA writing
> > > > data at same speed, and the
> > > > PPC itself uses the same memory too
>
> > > > Antti
>
> > > With custom Ethernet core + MPMC we get data rates slightly above
> > > 100MBps, depending on MTU. The single memory is shared by MicroBlaze/PPC
> > > for code and data access, at least one streaming data source (custom PIM
> > > for NPI) and the custom Ethernet IP (MAC + some packet composers,
> > > decoders, etc.) again connected to NPI.
> > > We decided not to use XPS_TEMAC because of its low performance. The problem is
> > > I lost my benchmark results. Sorry.
>
> > > Jan
>
> > hum.. my current task is to optimize a XPS_TEMAC based system
> > (with 1 single DDR2 chip as main memory!)
> > to reach about 580MBps
>
> > :(
>
> > I have never said that to be possible, but i need money :(
> > and if the goal cant be reached there will be none...
>
> > over 100MBps is sure possible (with XPS_TEMAC too)
> > but 580MBps is beyond doable i think for sure
>
> > Antti
>
> Just a simple calculation:
> 125000000 / 1024 / 1024 = 119.2MBps
> It is without protocol overhead, FCS, IFGs. How do you want to exceed
> the limit of Gigabit Ethernet?
>
> Jan
oops, silly me :(
B as in byte
I wanted to say we need 580 Mbit/s
Antti
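A back-of-envelope check (standard Ethernet/IP/UDP overheads assumed) shows 580 Mbit/s of UDP payload is comfortably below what the wire allows at MTU 1500; the hard part is the memory subsystem, not the link:

```python
# Per-frame on-wire cost at MTU 1500: preamble+SFD, Ethernet header,
# FCS and inter-frame gap surround the 1500-byte IP packet.
PREAMBLE, ETH_HDR, FCS, IFG = 8, 14, 4, 12      # bytes
IP_HDR, UDP_HDR, MTU = 20, 8, 1500

wire_bytes = PREAMBLE + ETH_HDR + MTU + FCS + IFG   # 1538 bytes/frame
udp_payload = MTU - IP_HDR - UDP_HDR                # 1472 bytes/frame

max_udp_mbps = 1000 * udp_payload / wire_bytes      # ~957 Mbit/s ceiling
frames_per_sec = 580e6 / 8 / udp_payload            # frames for 580 Mbit/s

print(f"UDP payload ceiling: {max_udp_mbps:.1f} Mbit/s")
print(f"frames/s for 580 Mbit/s: {frames_per_sec:.0f}")
```

So the link itself leaves plenty of headroom; whether the DMA and the shared DDR2 can feed roughly 49k full-size frames per second is the real question.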
Reply by Antt...@googlemail.com ● June 2, 2009
On 2 June, 21:46, "Phil Jessop" <p...@noname.org> wrote:
> <Antti.Luk...@googlemail.com> wrote in message
>
> news:45a07ecd-3a6c-4047-a640-cb5706d0b26b@k2g2000yql.googlegroups.com...
>
>
>
> > On 2 June, 21:05, Jan Pech <inva...@void.domain> wrote:
> >> On Tue, 2009-06-02 at 09:53 -0700, Antti wrote:
> >> > Hi
>
> >> > does anybody have real and realistic performance figures for Xilinx
> >> > GbE solution with XPS_TEMAC/MPMC ?
>
> >> > we need to get 60% of GbE wirespeed, UDP transmit only but it seems
> >> > like real hard target to reach :(
>
> >> > MPMC has memory latency of 23 cycles (added to EACH memory access
> >> > cycle) so the ethernet
> >> > SDMA takes a lot of bandwidth already, there is another DMA writing
> >> > data at same speed, and the
> >> > PPC itself uses the same memory too
>
> >> > Antti
>
> >> With custom Ethernet core + MPMC we get data rates slightly above
> >> 100MBps, depending on MTU. The single memory is shared by MicroBlaze/PPC
> >> for code and data access, at least one streaming data source (custom PIM
> >> for NPI) and the custom Ethernet IP (MAC + some packet composers,
> >> decoders, etc.) again connected to NPI.
> >> We decided not to use XPS_TEMAC because of its low performance. The problem is
> >> I lost my benchmark results. Sorry.
>
> >> Jan
>
> > hum.. my current task is to optimize a XPS_TEMAC based system
> > (with 1 single DDR2 chip as main memory!)
> > to reach about 580MBps
>
> > :(
>
> > I have never said that to be possible, but i need money :(
> > and if the goal cant be reached there will be none...
>
> > over 100MBps is sure possible (with XPS_TEMAC too)
> > but 580MBps is beyond doable i think for sure
>
> > Antti
>
> >
>
> > over 100MBps is sure possible (with XPS_TEMAC too)
>
> really? over GbE? impossible!
>
> I take it you mean over 100Mbps which is far more plausible.
>
> Phil
yes, sorry, I did mean:
XPS_TEMAC/MPMC, GbE (1000 Base-X fiber)
> 100MBps is OK
580MBps -- hardly possible
Antti
Reply by Jan Pech ● June 2, 2009
On Tue, 2009-06-02 at 20:47 +0200, Jan Pech wrote:
> On Tue, 2009-06-02 at 11:32 -0700, Antti.Lukats@googlemail.com wrote:
> > On 2 June, 21:05, Jan Pech <inva...@void.domain> wrote:
> > > On Tue, 2009-06-02 at 09:53 -0700, Antti wrote:
> > > > Hi
> > >
> > > > does anybody have real and realistic performance figures for Xilinx
> > > > GbE solution with XPS_TEMAC/MPMC ?
> > >
> > > > we need to get 60% of GbE wirespeed, UDP transmit only but it seems
> > > > like real hard target to reach :(
> > >
> > > > MPMC has memory latency of 23 cycles (added to EACH memory access
> > > > cycle) so the ethernet
> > > > SDMA takes a lot of bandwidth already, there is another DMA writing
> > > > data at same speed, and the
> > > > PPC itself uses the same memory too
> > >
> > > > Antti
> > >
> > > With custom Ethernet core + MPMC we get data rates slightly above
> > > 100MBps, depending on MTU. The single memory is shared by MicroBlaze/PPC
> > > for code and data access, at least one streaming data source (custom PIM
> > > for NPI) and the custom Ethernet IP (MAC + some packet composers,
> > > decoders, etc.) again connected to NPI.
> > > We decided not to use XPS_TEMAC because of its low performance. The problem is
> > > I lost my benchmark results. Sorry.
> > >
> > > Jan
> >
> > hum.. my current task is to optimize a XPS_TEMAC based system
> > (with 1 single DDR2 chip as main memory!)
> > to reach about 580MBps
> >
> > :(
> >
> > I have never said that to be possible, but i need money :(
> > and if the goal cant be reached there will be none...
> >
> > over 100MBps is sure possible (with XPS_TEMAC too)
> > but 580MBps is beyond doable i think for sure
> >
> > Antti
> >
>
> Just a simple calculation:
> 125000000 / 1024 / 1024 = 119.2MBps
> It is without protocol overhead, FCS, IFGs. How do you want to exceed
> the limit of Gigabit Ethernet?
>
> Jan
>
Or did I get you wrong, and you are talking about Mbits per second? I was
talking about Mbytes per second.
If so, your goal should be reachable using xps_ll_temac instead of
xps_temac.
Jan
Reply by Jan Pech ● June 2, 2009
On Tue, 2009-06-02 at 11:32 -0700, Antti.Lukats@googlemail.com wrote:
> On 2 June, 21:05, Jan Pech <inva...@void.domain> wrote:
> > On Tue, 2009-06-02 at 09:53 -0700, Antti wrote:
> > > Hi
> >
> > > does anybody have real and realistic performance figures for Xilinx
> > > GbE solution with XPS_TEMAC/MPMC ?
> >
> > > we need to get 60% of GbE wirespeed, UDP transmit only but it seems
> > > like real hard target to reach :(
> >
> > > MPMC has memory latency of 23 cycles (added to EACH memory access
> > > cycle) so the ethernet
> > > SDMA takes a lot of bandwidth already, there is another DMA writing
> > > data at same speed, and the
> > > PPC itself uses the same memory too
> >
> > > Antti
> >
> > With custom Ethernet core + MPMC we get data rates slightly above
> > 100MBps, depending on MTU. The single memory is shared by MicroBlaze/PPC
> > for code and data access, at least one streaming data source (custom PIM
> > for NPI) and the custom Ethernet IP (MAC + some packet composers,
> > decoders, etc.) again connected to NPI.
> > We decided not to use XPS_TEMAC because of its low performance. The problem is
> > I lost my benchmark results. Sorry.
> >
> > Jan
>
> hum.. my current task is to optimize a XPS_TEMAC based system
> (with 1 single DDR2 chip as main memory!)
> to reach about 580MBps
>
> :(
>
> I have never said that to be possible, but i need money :(
> and if the goal cant be reached there will be none...
>
> over 100MBps is sure possible (with XPS_TEMAC too)
> but 580MBps is beyond doable i think for sure
>
> Antti
>
Just a simple calculation:
125000000 / 1024 / 1024 = 119.2MBps
That is without protocol overhead, FCS, or IFGs. How do you want to exceed
the limit of Gigabit Ethernet?
Jan
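Jan's arithmetic, reproduced as a quick sanity check (his "MBps" figure is really mebibytes per second), also shows why the units matter here:

```python
# Gigabit Ethernet carries at most 1e9 bits/s on the wire, i.e.
# 125,000,000 bytes/s before any framing or protocol overhead.
raw_bytes_per_sec = 1_000_000_000 // 8
mib_per_sec = raw_bytes_per_sec / 1024 / 1024   # Jan's figure

print(f"{mib_per_sec:.1f} MiB/s")               # 119.2 MiB/s

# So 580 Mbytes/s is far beyond the line rate, while 580 Mbit/s
# is only 58% of it.
print(580 / 10, "% of line rate if the target is in Mbit/s")
```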
Reply by Phil Jessop ● June 2, 2009
<Antti.Lukats@googlemail.com> wrote in message
news:45a07ecd-3a6c-4047-a640-cb5706d0b26b@k2g2000yql.googlegroups.com...
> On 2 June, 21:05, Jan Pech <inva...@void.domain> wrote:
>> On Tue, 2009-06-02 at 09:53 -0700, Antti wrote:
>> > Hi
>>
>> > does anybody have real and realistic performance figures for Xilinx
>> > GbE solution with XPS_TEMAC/MPMC ?
>>
>> > we need to get 60% of GbE wirespeed, UDP transmit only but it seems
>> > like real hard target to reach :(
>>
>> > MPMC has memory latency of 23 cycles (added to EACH memory access
>> > cycle) so the ethernet
>> > SDMA takes a lot of bandwidth already, there is another DMA writing
>> > data at same speed, and the
>> > PPC itself uses the same memory too
>>
>> > Antti
>>
>> With custom Ethernet core + MPMC we get data rates slightly above
>> 100MBps, depending on MTU. The single memory is shared by MicroBlaze/PPC
>> for code and data access, at least one streaming data source (custom PIM
>> for NPI) and the custom Ethernet IP (MAC + some packet composers,
>> decoders, etc.) again connected to NPI.
>> We decided not to use XPS_TEMAC because of its low performance. The problem is
>> I lost my benchmark results. Sorry.
>>
>> Jan
>
> hum.. my current task is to optimize a XPS_TEMAC based system
> (with 1 single DDR2 chip as main memory!)
> to reach about 580MBps
>
> :(
>
> I have never said that to be possible, but i need money :(
> and if the goal cant be reached there will be none...
>
> over 100MBps is sure possible (with XPS_TEMAC too)
> but 580MBps is beyond doable i think for sure
>
> Antti
>
>
>
>
> over 100MBps is sure possible (with XPS_TEMAC too)
really? Over GbE? Impossible!
I take it you mean over 100 Mbps, which is far more plausible.
Phil