Reply by Nahum Barnea October 12, 2003
Richard Iachetta <iachetta@us.ibm.com> wrote in message news:<MPG.19f0c726fe699c2b98982b@ausnews.austin.ibm.com>...
> In article <3F85E01C.A0E99EFA@xilinx.com>, eric.crabill@xilinx.com says...
> > you will need to put in some design
> > effort.
>
> That's an understatement!
Hi Richard,

Could you be so kind as to share some details about the effort involved
in the Xilinx PCI-X core solution to this problem?  I have a similar
problem - only all ports are PCI-X.

ThankX,
NAHUM.
Reply by Nicholas C. Weaver October 10, 2003
In article <3F86E3E4.BED89355@xilinx.com>,
Eric Crabill  <eric.crabill@xilinx.com> wrote:
> PCI and PCI-X are not busses that provide guaranteed
> bandwidth.  I've seen bandwidth on a PCI 64/66 bus fall
> to 40 Mbytes/sec during certain operations because the
> devices on it were designed poorly (mostly for the
> reasons I stated in the first paragraph).
Also, the type of transaction can contribute as well.  Just TRY
streaming through (in -> memory -> out) two 1 Gb ethernet ports when
you have full rate, minimum sized packets, using PCI or PCI-X based
hardware.
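To get a feel for the numbers, here is a back-of-the-envelope Python
sketch.  The per-transaction overhead and descriptor costs below are
assumptions of mine for illustration, not measurements:

MIN_FRAME     = 64        # bytes, minimum Ethernet frame
WIRE_OVERHEAD = 8 + 12    # bytes, preamble + inter-frame gap
LINE_RATE     = 1e9       # bits/s per port, per direction

pps_per_port = LINE_RATE / ((MIN_FRAME + WIRE_OVERHEAD) * 8)   # ~1.49 Mpackets/s

# Assumptions: full-duplex forwarding between the two ports, 64-bit 66 MHz
# bus, each packet DMA'd in and back out (2 bus crossings), plus one
# descriptor fetch and one status writeback per packet, and ~6 overhead
# clocks (arbitration, address phase, turnaround) per transaction.
BUS_CLOCK       = 66e6
data_phases     = MIN_FRAME / 8          # 8 bytes per clock on a 64-bit bus
overhead_clocks = 6
clocks_per_pkt  = 2 * (data_phases + overhead_clocks) + 2 * (2 + overhead_clocks)

total_pps   = 2 * pps_per_port           # both ports at line rate
clocks_need = total_pps * clocks_per_pkt
print(f"bus clocks needed/s:  {clocks_need/1e6:.0f} M vs {BUS_CLOCK/1e6:.0f} M available")
print(f"required utilization: {100 * clocks_need / BUS_CLOCK:.0f} %")

With those assumptions it comes out to roughly twice the bus cycles the
bus actually has, which is why minimum-sized packets hurt so badly.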
> I'm not trying to discourage you from using a Xilinx
> solution.  However, I'd prefer that potential customers
> make informed design decisions that result in the best
> combination of price/performance/features.
A very good attitude; I wish more companies would give such advice.

--
Nicholas C. Weaver                          nweaver@cs.berkeley.edu
Reply by Richard Iachetta October 10, 2003
In article <3F85E01C.A0E99EFA@xilinx.com>, eric.crabill@xilinx.com says...
> you will need to put in some design
> effort.
That's an understatement!

--
Rich Iachetta
I do not speak for IBM
Reply by Eric Crabill October 10, 2003
Hi,

Perhaps I am a bit jaded, but I think you will never
actually realize anything close to "full speed" using
PCI.  (PCI-X has some improvements in protocol).  Your
statement assumes that both the data source and the
data sink have an infinitely sized buffer, nobody uses
retries with delayed read requests, and you have huge
(kilobytes at a time) bursts.

> Aren't you cutting your bandwidth in half?
It depends -- are you talking about "theoretical" bandwidth, or
bandwidth you are likely to achieve?  If you are designing under the
assumption that you will achieve every last byte of 533 Mbytes/sec on
a PCI 64/66 bus, you will have some disappointment coming.  :)

PCI and PCI-X are not busses that provide guaranteed bandwidth.  I've
seen bandwidth on a PCI 64/66 bus fall to 40 Mbytes/sec during certain
operations because the devices on it were designed poorly (mostly for
the reasons I stated in the first paragraph).
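As a rough illustration of how far short of the peak you can land,
consider burst length alone.  The 8-clock per-transaction overhead
below is an optimistic assumption I picked for the example, not a
measured figure; a retried delayed read costs far more:

BUS_CLOCK = 66e6          # Hz
BUS_WIDTH = 8             # bytes per data phase (64-bit bus)
PEAK      = BUS_CLOCK * BUS_WIDTH   # ~528 Mbytes/sec at 66.0 MHz
                                    # (the oft-quoted 533 assumes 66.67 MHz)

def effective_mbps(burst_bytes, overhead_clocks=8):
    # Throughput when every burst pays a fixed per-transaction overhead
    # (arbitration, address phase, turnaround, initial wait states).
    data_clocks = burst_bytes / BUS_WIDTH
    return PEAK * data_clocks / (data_clocks + overhead_clocks) / 1e6

for burst in (32, 64, 128, 512, 4096):
    print(f"{burst:5d}-byte bursts: ~{effective_mbps(burst):5.0f} Mbytes/sec")

Only kilobyte-sized bursts get anywhere near the theoretical number;
small bursts plus retries are how you end up down around 40 Mbytes/sec.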
> like to have the pci66 busses be able to run at full
> speed to access the host's memory (primary side of
> the pcix133 bridge #1).  If you drop to 66 MHz here,
> now my two secondary busses can only run at 1/2 their
> bandwidth _if_ trying to access host memory at the
> _same_ time.
While the point you raise is theoretically valid, you must consider
that the bandwidth you achieve is going to be no greater than the
weakest link in the path.  What is the actual performance of the
PCI-X 133 Host?  How about your PCI 66 components?  The bridge
performance may be moot.

An interesting experiment you could conduct would be to plug your
PCI 66 component into a PCI 66 host, and see how close to "full speed"
you can really get using a PCI/PCI-X protocol analyzer.  Then, you
could buy two bridge demo boards from a bridge manufacturer (PLX/Hint
comes to mind...) and see what you get behind two bridges, configured
as I described.  I would certainly conduct this experiment as a way
to justify the design time and expense of a custom bridge to myself
or my manager.

While I suspect you won't get half of "full speed" in either case, I
am very often wrong.  That's why I'm suggesting you try it out.

I'm not trying to discourage you from using a Xilinx solution.
However, I'd prefer that potential customers make informed design
decisions that result in the best combination of
price/performance/features.

Good luck,
Eric
Reply by Chad Bearden October 10, 2003
Eric Crabill <eric.crabill@xilinx.com> wrote in message news:<3F85E01C.A0E99EFA@xilinx.com>...
> Hi,
>
> Logically, what you described can be built with three
> PCI-X to PCI-X bridges.
>
> You can take bridge #1 from PCI-X 133 to PCI-X 66.
Aren't you cutting your bandwidth in half?  I would like to have the
pci66 busses be able to run at full speed to access the host's memory
(primary side of the pcix133 bridge #1).  If you drop to 66 MHz here,
now my two secondary busses can only run at 1/2 their bandwidth _if_
trying to access host memory at the _same_ time.
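To put rough numbers on my worry (theoretical per-segment ceilings
only, my own arithmetic, assuming 64-bit busses throughout):

MB = 1e6
pci66_ceiling   = 66e6 * 8 / MB      # ~528 Mbytes/sec, each PCI 66 secondary
pcix66_ceiling  = 66e6 * 8 / MB      # intermediate PCI-X 66 segment
pcix133_ceiling = 133e6 * 8 / MB     # ~1064 Mbytes/sec, host segment

print(f"each secondary alone:                   ~{pci66_ceiling:.0f} Mbytes/sec")
print(f"each, both active, middle at PCI-X 66:  ~{pcix66_ceiling / 2:.0f} Mbytes/sec")
print(f"each, both active, middle at PCI-X 133: ~{min(pci66_ceiling, pcix133_ceiling / 2):.0f} Mbytes/sec")

With the intermediate segment at PCI-X 66, the two secondaries share
~528 Mbytes/sec and get about half each when both are hitting host
memory; a PCI-X 133 middle segment would in theory let each keep its
full ceiling.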
> On
> that PCI-X 66 bus segment, you put bridge #2a and #2b,
> both of which bridge from PCI-X 66 to PCI 66.  So, you
> can actually go buy three of these ASSPs and build
> exactly what you want.
>
> I wouldn't want to turn you away from a Xilinx solution.
> A Xilinx solution could be a one-chip solution, offer
> lower latency, and provide you with the opportunity to
> customize your design in a way you cannot with ASSPs.
> However, you would want to carefully weigh the benefits
> with the downsides -- you will need to put in some design
> effort.  Another thing to consider is cost, which will
> be a function of the size of your final design.
>
> Good luck,
> Eric
>
Reply by Chad Bearden October 10, 2003
If you mean putting both Tundra Tsi310 bridges on a single pci-x133
bus, I don't think this is electrically supported.  As I understand it,
you can only have one load on a pci-x133 bus.  Please correct me if I
have mis-stated your intention.

chad.

> If you're looking for an existing silicon solution I believe you could
> do it with two Tundra Tsi310 parts.
>
> -hpa
Reply by October 9, 2003
Followup to:  <906428f5.0310091342.3bb90eb3@posting.google.com>
By author:    chadb@beardendesigns.com (Chad Bearden)
In newsgroup: comp.arch.fpga
> I would like to split a pci-x133 bus into 2 parallel pci-66 busses.
> Has anyone done this?  I'm not afraid to purchase the Xilinx pci-x
> core and halfbridge IP but just looking for some wisdom.
>
>                |--------|
>                |        |<----- pci-66 ----->
> <---pcix133--->| bridge |
>                |        |<----- pci-66 ----->
>                |--------|
>
If you're looking for an existing silicon solution I believe you could
do it with two Tundra Tsi310 parts.

	-hpa

--
<hpa@transmeta.com> at work, <hpa@zytor.com> in private!
If you send me mail in HTML format I will assume it's spam.
"Unix gives you enough rope to shoot yourself in the foot."
Architectures needed: ia64 m68k mips64 ppc ppc64 s390 s390x sh v850 x86-64
Reply by Eric Crabill October 9, 2003
Hi,

Logically, what you described can be built with three
PCI-X to PCI-X bridges.

You can take bridge #1 from PCI-X 133 to PCI-X 66.  On
that PCI-X 66 bus segment, you put bridge #2a and #2b,
both of which bridge from PCI-X 66 to PCI 66.  So, you
can actually go buy three of these ASSPs and build
exactly what you want.
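If it helps to see the hierarchy spelled out, here is a minimal Python
sketch of the logical topology (my own naming, nothing vendor-specific):

# The three-bridge arrangement described above, as a simple tree.
topology = {
    "PCI-X 133 (host segment)": {
        "bridge #1: PCI-X 133 -> PCI-X 66": {
            "PCI-X 66 (intermediate segment)": {
                "bridge #2a: PCI-X 66 -> PCI 66": "PCI 66 bus A",
                "bridge #2b: PCI-X 66 -> PCI 66": "PCI 66 bus B",
            }
        }
    }
}

def show(node, depth=0):
    # Print the tree with indentation; leaves are plain strings.
    if isinstance(node, str):
        print("  " * depth + node)
        return
    for name, child in node.items():
        print("  " * depth + name)
        show(child, depth + 1)

show(topology)

The thing to notice is that both secondary busses hang off the single
PCI-X 66 segment in the middle, which is what the bandwidth discussion
elsewhere in this thread is about.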

I wouldn't want to turn you away from a Xilinx solution.
A Xilinx solution could be a one-chip solution, offer
lower latency, and provide you with the opportunity to
customize your design in a way you cannot with ASSPs.
However, you would want to carefully weigh the benefits
with the downsides -- you will need to put in some design
effort.  Another thing to consider is cost, which will
be a function of the size of your final design.

Good luck,
Eric

Chad Bearden wrote:
> I would like to split a pci-x133 bus into 2 parallel pci-66 busses.
> Has anyone done this?  I'm not afraid to purchase the Xilinx pci-x
> core and halfbridge IP but just looking for some wisdom.
>
>                |--------|
>                |        |<----- pci-66 ----->
> <---pcix133--->| bridge |
>                |        |<----- pci-66 ----->
>                |--------|
>
> chad
Reply by Chad Bearden October 9, 2003
I would like to split a pci-x133 bus into 2 parallel pci-66 busses. 
Has anyone done this?  I'm not afraid to purchase the Xilinx pci-x
core and halfbridge IP but just looking for some wisdom.

               |--------|              
               |        |<----- pci-66 ----->              
<---pcix133--->| bridge |
               |        |<----- pci-66 ----->              
               |--------|              
chad.