
ddr with multiple users

Started by David Ashley September 7, 2006
KJ wrote:
> ...
> is necessary to go from interface #10 to interface #11. Presumably
> this is simply the OpenCores DDR controller or some other commercial
> controller.
> ...
I'm looking at the OpenCores DDR controller for reference and educational purposes; that's what it appears most suited for. I'm all new to VHDL and FPGA design, mind you, but one has to start somewhere. I did a fair amount of 7400-series logic design around 1980-1982, but things are a bit more intricate now.
> Good question. I can't really give details, but I'll say that I've
> implemented the approach that I mentioned for interfacing six masters
> to DDR and the logic resources consumed were less than but roughly
> comparable to that consumed by a single DDR controller. I had all the
> same issues that you're aware of regarding how you need to properly
> control DDR to get good performance and all.
This is what I consider the most important paragraph of your response, and based on it I'll probably abandon the multiple-DDR-aware-master idea. My understanding of Wishbone is certainly incomplete; I had forgotten it is a master/slave system for connecting 2 endpoints. What I had been thinking of was sort of like one of the Xilinx buses (OPB?) where they just wire-OR the control signals together, and all inactive bus drivers are supposed to drive their signals to logic 0 when they don't own the bus. This boils down to: each of the 4 "masters" just has some representation of the DDR's pins, plus a mechanism to request the bus. Until the bus has been granted, each master shuts up. Once the bus is granted, the owning master can diddle the lines, and it's as if that single master were controlling the DDR itself.

Now in retrospect it occurs to me that the main benefit of something like that would be minimizing latency, but only in the case where the DDR is mostly inactive. If it's frequently being used, then each master must wait for its turn anyway and latency is out the window.
>> Complications:
>> 1) To support bursting, it needs to have some sort of fifo. An easy way
>> would be the core stores up the whole burst, then transacts it to the
>> DDR when all is known.
>
> I'd suggest keeping along that train of thought as you go forward but
> keep refining it.
I'm starting to like this approach. Each master could then just queue up an access, say (one possible encoding is sketched after this post):

WRITE = ADDRESS + some number of 32 bit words of data to put there
READ = ADDRESS + the number of words you want from there

In either case data gets shoved into a fifo owned by the master. Once the transaction is queued up, the master just needs to wait until it's done.

Let's see what the masters are:
1) CPU doing cache line fills + flushes, no single beat reads/writes
2) Batches of audio data for read
3) Video data for read
4) Perhaps DMA channels initiated by the CPU, transfer from BRAM to memory, say for ethernet packets.

2, 3, 4 latency isn't an issue. #1 latency can be minimized if the CPU uses BRAM as a cache, which is the intent.

Thanks for taking the time to write all that!
Dave
--
David Ashley
http://www.xdr.com/dash
Embedded linux, device drivers, system architecture
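A minimal VHDL sketch of one way such a queued request could be encoded; the field names and widths are illustrative assumptions, not anything from the posts:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Hypothetical encoding of one queued master request. Widths are
-- placeholders: 26-bit word address, up to 63-word bursts.
package master_req_pkg is
  type req_kind_t is (REQ_READ, REQ_WRITE);

  type master_req_t is record
    kind   : req_kind_t;
    addr   : unsigned(25 downto 0);  -- word address in DDR space
    length : unsigned(5 downto 0);   -- burst length in 32-bit words
  end record;
end package master_req_pkg;

A WRITE request would be followed by 'length' data words in the same fifo; a READ carries only the address and count.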
David Ashley wrote:
> <snip>
Since routing multiple 32+ bit buses consumes a fair amount of routing and control logic, which needs tweaking whenever the design changes, I have been considering ring buses for future designs. As long as latency is not a primary issue, the ring bus can also be used for data streaming, with the memory controller simply being one more possible target/initiator node.

Using dual ring buses (clockwise + counter-clockwise) to link critical nodes can take care of most latency concerns by improving proximity. For large and extremely intensive applications like GPUs, the memory controller can have multiple ring-bus taps to further increase bandwidth and reduce latency; look at ATI's X1600 GPUs.

Ring buses are great in ASICs since they have no a priori routing constraints. I wonder how well this would apply to FPGAs, since these are optimized for linear left-to-right data paths, give or take a few rows/columns. (I did some preliminary work on this and the partial prototype reached 240MHz on a V4LX25-10, limited mostly by routing and 4:1 muxes, IIRC.)

--
Daniel Sauvageau
moc.xortam@egavuasd
Matrox Graphics Inc.
1155 St-Regis, Dorval, Qc, Canada
514-822-6000
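As a rough feel for why each ring hop is cheap, here is a toy VHDL sketch of a single ring-bus node: one register stage per hop that ejects words addressed to it and injects a local word into an empty slot. The slot format, 2-bit node ids, and all names are assumptions, not Daniel's design:

library ieee;
use ieee.std_logic_1164.all;

entity ring_node is
  generic (MY_ID : std_logic_vector(1 downto 0) := "00");
  port (
    clk       : in  std_logic;
    -- upstream slot: valid + destination id + payload
    in_valid  : in  std_logic;
    in_dest   : in  std_logic_vector(1 downto 0);
    in_data   : in  std_logic_vector(31 downto 0);
    -- downstream slot
    out_valid : out std_logic;
    out_dest  : out std_logic_vector(1 downto 0);
    out_data  : out std_logic_vector(31 downto 0);
    -- local inject/eject
    inj_valid : in  std_logic;
    inj_dest  : in  std_logic_vector(1 downto 0);
    inj_data  : in  std_logic_vector(31 downto 0);
    inj_taken : out std_logic;
    ej_valid  : out std_logic;
    ej_data   : out std_logic_vector(31 downto 0)
  );
end entity;

architecture rtl of ring_node is
  signal slot_free : std_logic;
begin
  -- the incoming slot is free if it is empty or its word ejects here
  slot_free <= '1' when in_valid = '0' or in_dest = MY_ID else '0';
  inj_taken <= inj_valid and slot_free;

  process (clk)
  begin
    if rising_edge(clk) then
      -- eject traffic addressed to this node
      if in_valid = '1' and in_dest = MY_ID then
        ej_valid <= '1';
        ej_data  <= in_data;
      else
        ej_valid <= '0';
      end if;

      -- forward through-traffic, else inject local traffic
      if slot_free = '0' then
        out_valid <= '1';
        out_dest  <= in_dest;
        out_data  <= in_data;
      elsif inj_valid = '1' then
        out_valid <= '1';
        out_dest  <= inj_dest;
        out_data  <= inj_data;
      else
        out_valid <= '0';
      end if;
    end if;
  end process;
end architecture;

The per-hop cost is one register stage and a small mux, which is the appeal; the open question above is how well the FPGA place-and-route tools handle the closed loop.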
Daniel S. wrote:
> <snip>
Hi Daniel,

Here is my suggestion. For example, say there are 5 components which have access to the DDR controller module. What I would like to do is:
1. Each of the 5 components has an output buffer shared with the DDR controller module;
2. The DDR controller module has an output bus shared by all 5 components as their input bus.

Each word has an additional bit to indicate whether it is data or a command. If it is a command, it indicates which component the output bus is targeting. If it is data, the data belongs to the targeted component. Output data streams look like this:

Command;
data;
...
data;
Command;
data;
...
data;

In the command word, you may add any information you like. The best benefit of this scheme is that it has no delays and no penalty in performance, and it has the minimum number of buses.

I don't see that a ring bus has any benefit over my scheme. In the ring situation, you must have (N+1)*2 buses for N >= 2. In my scheme, it needs N+1 buses, where N is the number of components, excluding the DDR controller module.

Weng
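A minimal sketch of the listener each component would need on the shared output bus, assuming a 33-bit bus word whose top bit flags command words and whose bits 31..29 carry the target id; the layout is an assumption for illustration:

library ieee;
use ieee.std_logic_1164.all;

entity bus_listener is
  generic (MY_ID : std_logic_vector(2 downto 0) := "000");
  port (
    clk        : in  std_logic;
    bus_valid  : in  std_logic;
    bus_word   : in  std_logic_vector(32 downto 0);
    data_valid : out std_logic;
    data_out   : out std_logic_vector(31 downto 0)
  );
end entity;

architecture rtl of bus_listener is
  signal mine : std_logic := '0';
begin
  process (clk)
  begin
    if rising_edge(clk) then
      data_valid <= '0';
      if bus_valid = '1' then
        if bus_word(32) = '1' then
          -- command word: remember whether the data that follows is ours
          if bus_word(31 downto 29) = MY_ID then
            mine <= '1';
          else
            mine <= '0';
          end if;
        elsif mine = '1' then
          -- data word addressed to this component
          data_valid <= '1';
          data_out   <= bus_word(31 downto 0);
        end if;
      end if;
    end if;
  end process;
end architecture;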
Weng Tianxiang wrote:
> <snip>
Weng,

Your strategy seems to make sense to me. I don't actually know what a ring bus is. Your design seems appropriate for the imbalance built into the system -- that is, any of the 5 components can initiate a command at any time, but the DDR controller can only respond to one command at a time. So you don't need a unique link to each component for data coming from the DDR.

However, thinking a little more on it, each of the 5 components must have logic to ignore the data that isn't targeted at itself. Also, in order to be able to deal with data returned from the DDR at a later time, a component might store it in a fifo anyway.

The approach I had sort of been envisioning involved 2 fifos for each component: one for commands and data going from the component to the DDR, and the other for data coming back from the DDR. The DDR controller just needs to decide which component to pull commands from -- round robin would be fine for my application. If it's a read command, it need only stuff the returned data into the right fifo.

I don't know, I think I like your approach. One can always add a 2nd fifo for read data if desired, and I think the logic to ignore others' data is probably trivial...
-Dave
--
David Ashley
http://www.xdr.com/dash
Embedded linux, device drivers, system architecture
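A sketch of that round-robin pick, for the four masters listed earlier in the thread; req would be the not-empty flags of the per-master command fifos, and the names and widths are assumptions:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity rr_select is
  port (
    clk     : in  std_logic;
    rst     : in  std_logic;
    advance : in  std_logic;                     -- current transaction done
    req     : in  std_logic_vector(3 downto 0);  -- fifo not-empty flags
    grant   : out unsigned(1 downto 0)           -- which fifo to service
  );
end entity;

architecture rtl of rr_select is
  signal last : unsigned(1 downto 0) := (others => '0');
begin
  process (clk)
    variable idx : natural range 0 to 3;
  begin
    if rising_edge(clk) then
      if rst = '1' then
        last <= (others => '0');
      elsif advance = '1' then
        -- scan the masters in rotating order, starting after 'last'
        for i in 1 to 4 loop
          idx := (to_integer(last) + i) mod 4;
          if req(idx) = '1' then
            last <= to_unsigned(idx, 2);
            exit;
          end if;
        end loop;
      end if;
    end if;
  end process;
  grant <= last;
end architecture;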
"David Ashley" <dash@nowhere.net.dont.email.me> wrote in message 
news:4505047b$1_1@x-privat.org...
> Weng Tianxiang wrote:
>> Hi Daniel,
>> Here is my suggestion.
>> For example, there are 5 components which have access to DDR controller
>> module.
>> What I would like to do is:
>> 1. Each of 5 components has an output buffer shared by DDR controller
>> module;
Not sure what is being 'shared'. If it is the actual DDR output pins, then this is problematic... you likely won't be able to meet DDR timing when those DDR signals are fanned out to 5 locations instead of just one, as they would be with a standard DDR controller. Even if it did work for 5, it wouldn't scale well either (i.e. 10 users of the DDR).

If what is 'shared' is the output from the 5 components that feeds into the input of the DDR controller, then you're talking about internal tri-states, which may be a problem depending on which target device is in question.

<snip>
>> In the command data, you may add any information you like.
>> The best benefit of this scheme is it has no delays and no penalty in
>> performance, and it has minimum number of buses.
You haven't convinced me of any of these points. Plus, how this scheme would cope with the peculiarities of DDRs themselves, where there is a definite performance hit for randomly thrashing about in memory, has not been addressed.
>>
>> Weng
>>
>
> Weng,
>
> Your strategy seems to make sense to me. I don't actually know what a
> ring bus is. Your design seems appropriate for the imbalance built
> into the system -- that is, any of the 5 components can initiate a
> command at any time, but the DDR controller can only respond
> to one command at a time. So you don't need a unique link to each
> component for data coming from the DDR.
A unique link to an arbitrator though allows each component to 'think' that it is running independently and addressing DDR at the same time. In other words, all 5 components can start up their own transaction at the exact same time. The arbitration logic function would buffer up all 5, selecting one of them for output to the DDR. When reading DDR this might not help performance much but for writing it can be a huge difference.
>
> However, thinking a little more on it, each of the 5 components must
> have logic to ignore the data that isn't targeted at itself. Also,
> in order to be able to deal with data returned from the DDR at a
> later time, a component might store it in a fifo anyway.
>
> The approach I had sort of been envisioning involved 2 fifos for each
> component: one for commands and data going from the component to the
> DDR, and the other for data coming back from the DDR. The DDR
> controller just needs to decide which component to pull commands from
> -- round robin would be fine for my application. If it's a read
> command, it need only stuff the returned data into the right fifo.
That's one approach. If you think some more on this you should be able to see a way to have a single fifo for the readback data from the DDR (instead of one per component). KJ
KJ wrote:
> "David Ashley" <dash@nowhere.net.dont.email.me> wrote in message > news:4505047b$1_1@x-privat.org... > > Weng Tianxiang wrote: > >> Hi Daniel, > >> Here is my suggestion. > >> For example, there are 5 components which have access to DDR controller > >> module. > >> What I would like to do is: > >> 1. Each of 5 components has an output buffer shared by DDR controller > >> module; > Not sure what is being 'shared'. If it is the actual DDR output pins then > this is problematic....you likely won't be able to meet DDR timing when > those DDR signals are coming and spread out to 5 locations instead of just > one as it would be with a standard DDR controller. Even if it did work for > 5 it wouldn't scale well either (i.e. 10 users of the DDR). > > If what is 'shared' is the output from the 5 component that feed in to the > input of the DDR controller, than you're talking about internal tri-states > which may be a problem depending on which target device is in question. > > <snip> > >> In the command data, you may add any information you like. > >> The best benefit of this scheme is it has no delays and no penalty in > >> performance, and it has minimum number of buses. > You haven't convinced me of any of these points. Plus how it would address > the pecularities of DDRs themselves where there is a definite performance > hit for randomly thrashing about in memory has not been addressed. > >> > >> Weng > >> > > > > Weng, > > > > Your strategy seems to make sense to me. I don't actually know what a > > ring buffer is. Your design seems appropriate for the imbalance built > > into the system -- that is, any of the 5 components can initiate a > > command at any time, however the DDR controller can only respond > > to one command at a time. So you don't need a unique link to each > > component for data coming from the DDR. > A unique link to an arbitrator though allows each component to 'think' that > it is running independently and addressing DDR at the same time. In other > words, all 5 components can start up their own transaction at the exact same > time. The arbitration logic function would buffer up all 5, selecting one > of them for output to the DDR. When reading DDR this might not help > performance much but for writing it can be a huge difference. > > > > > However thinking a little more on it, each of the 5 components must > > have logic to ignore the data that isn't targeted at themselves. Also > > in order to be able to deal with data returned from the DDR at a > > later time, perhaps a component might store it in a fifo anyway. > > > > The approach I had sort of been envisioning involved for each > > component you have 2 fifos, one goes for commands and data > > from the component to the ddr, and the other is for data coming > > back from the ddr. The ddr controller just needs to decide which > > component to pull commands from -- round robin would be fine > > for my application. If it's a read command, it need only stuff the > > returned data in the right fifo. > That's one approach. If you think some more on this you should be able to > see a way to have a single fifo for the readback data from the DDR (instead > of one per component). > > KJ
Hi,

My scheme is not only a strategy but a finished work. The following is more to disclose.

1. What sharing between one component and the DDR controller system means: the output fifo of a component is shared by that component and the DDR controller module; the component uses the write half and the DDR controller uses the read half.

2. The output fifo uses the same technique as what I mentioned in the previous email, command words and data words are mixed, but there is more to it than that: a command word contains either a write or a read command. So in the output fifo, the data stream looks like this:

Read command, address, number of bytes;
Write command, address, number of bytes;
Data;
...
Data;
Write command, address, number of bytes;
Data;
...
Data;
Read command, address, number of bytes;
Read command, address, number of bytes;
...

3. On the DDR controller side, there is a small piece of logic to pick read commands out of the incoming command/data stream and put them into a read command queue that the DDR module uses to access read commands (one possible shape for this split is sketched after this post). You don't have to worry about why a read command is put behind a write command: for all components, if a read command is issued after a write command, the read command cannot be executed until the write data is fully written into the DDR system, to avoid disturbing the write/read order.

4. The DDR controller has its own output fifo and a separate output bus. The output fifo acts as a buffer that decouples the DDR's own operations from the output function. The DDR controller reads data from DDR memory and puts it into its output fifo. An output bus driver picks up data from the DDR output buffer and puts it on the output bus in the format the target component likes best. The output bus is shared by the 5 components, which read their own data, like a wireless communication channel: they only listen for and take their own data on the output bus, never interfering with others.

5. All components work at their full speeds.

6. The arbiter module resides in the DDR controller module. It doesn't control which component should output data; it controls which fifo should be read first to keep it from filling up, and it determines how to insert commands into the DDR command stream that will be sent to the DDR chip. In that way, all output fifos work at full speed according to their own rules.

7. Every component must have a read fifo to store data read from the DDR output bus. One cannot skip the read fifo, because you must have the capability to adjust the read speed for each component, and read data on the DDR output bus disappears after 1 clock.

In short, each component has a write fifo whose read side is used by the DDR controller, and a read fifo that picks its data off the DDR controller output bus.

As a result, the number of wires used for communications between the DDR controller and all the components is dramatically reduced, by at least 100 wires for a 5 component system.

What is the other problem?

Weng
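One possible shape for the splitting logic in point 3, as a sketch only: it assumes a 34-bit fifo word with a command flag bit (33), a read/write bit (32), and a 32-bit payload, and it assumes the read-command queue is never full. Holding reads back until earlier writes complete is left to the downstream DDR logic, as described above. None of this is Weng's actual code:

library ieee;
use ieee.std_logic_1164.all;

entity stream_split is
  port (
    -- read side of one component's output fifo
    fifo_empty : in  std_logic;
    fifo_q     : in  std_logic_vector(33 downto 0);
    fifo_rd    : out std_logic;
    -- read-command queue (assumed never full, for brevity)
    rdq_wr     : out std_logic;
    rdq_d      : out std_logic_vector(31 downto 0);
    -- write path toward the DDR core
    wr_ready   : in  std_logic;
    wr_valid   : out std_logic;
    wr_d       : out std_logic_vector(32 downto 0)
  );
end entity;

architecture rtl of stream_split is
  signal pop : std_logic;
begin
  -- read commands are always absorbed into the read queue; write
  -- commands and their data advance only when the write path is ready
  pop <= '1' when fifo_empty = '0' and fifo_q(33) = '1' and fifo_q(32) = '1' else
         '1' when fifo_empty = '0' and wr_ready = '1' else
         '0';

  fifo_rd  <= pop;
  rdq_wr   <= pop and fifo_q(33) and fifo_q(32);
  rdq_d    <= fifo_q(31 downto 0);
  wr_valid <= pop and not (fifo_q(33) and fifo_q(32));
  wr_d     <= fifo_q(32 downto 0);
end architecture;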
Weng Tianxiang wrote:
> <snip>
Weng,

OK, I'm a bit clearer now on what you have. What you've described is (I think) also functionally identical to what I was suggesting earlier (which is also a working, tested and shipping design).
From a design reuse standpoint it is not quite as good as what I suggested, though. A better partitioning would be to have the fifos and control logic in a standalone module. Each component would talk point to point with this new module on one side (equivalent to your components writing commands and data into the fifo). The function of this module would be to select commands (based on whatever arbitration algorithm is preferable) and output them over a point-to-point connection to a standard DDR controller (this is equivalent to your DDR controller 'read' side of the fifo). This module is essentially the bus arbitration module.

Whether implemented as a standalone module (as I've done) or embedded into a customized DDR controller (as you've done), it ends up with the same functionality, should result in the same logic/resource usage, and results in a working design that can run the DDRs at the best possible rate.

But in my case, I now have a standalone arbitration module with standardized interfaces that can be used to arbitrate between totally different things other than DDRs. In my case, I instantiated three arbitrators that connected to three separate DDRs (two with six masters, one with 12) and a fourth arbitrator that connected 13 bus masters to a single PCI bus. No code changes are required; only change the generics when instantiating the module to essentially 'tune' it to the particular usage.

One other point: you probably don't need a read data fifo per component; you can get away with just one single fifo inside the arbitration module. That fifo would not hold the read data but just a code to tell the arbitrator who to route the read data back to. The arbitrator would write this code into the fifo at the point where it initiates a read to the DDR controller. The read data itself could be broadcast to all components in parallel once it arrives back. Only one component, though, would get the signal flagging that the data was valid, based on a simple decode of the above-mentioned code that the arbitrator put into the small read fifo. In other words, this fifo only needs to be wide enough to handle the number of users (i.e. 5 masters would imply a 3-bit code) and only deep enough to handle whatever the latency is between initiating a read command to the DDR controller and when the data actually comes back.

KJ
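A sketch of that read-routing fifo, with assumed names; for simplicity it pops one tag per returning data beat, whereas a burst-oriented version would store the burst length alongside the master code:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity read_router is
  generic (N_MASTERS : positive := 5);
  port (
    clk      : in  std_logic;
    -- pushed when a read command is issued to the DDR controller
    issue    : in  std_logic;
    issue_id : in  natural range 0 to N_MASTERS-1;
    -- read data returning from the DDR controller
    rd_valid : in  std_logic;
    rd_data  : in  std_logic_vector(31 downto 0);
    -- data broadcast to all masters, with per-master valid strobes
    m_data   : out std_logic_vector(31 downto 0);
    m_valid  : out std_logic_vector(N_MASTERS-1 downto 0)
  );
end entity;

architecture rtl of read_router is
  -- small tag fifo, deep enough to cover the controller's read latency
  type tag_mem_t is array (0 to 15) of natural range 0 to N_MASTERS-1;
  signal tags   : tag_mem_t;
  signal wp, rp : unsigned(3 downto 0) := (others => '0');
begin
  process (clk)
  begin
    if rising_edge(clk) then
      m_valid <= (others => '0');
      if issue = '1' then                    -- remember who asked
        tags(to_integer(wp)) <= issue_id;
        wp <= wp + 1;
      end if;
      if rd_valid = '1' then                 -- route the returning beat
        m_data <= rd_data;
        m_valid(tags(to_integer(rp))) <= '1';
        rp <= rp + 1;
      end if;
    end if;
  end process;
end architecture;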
KJ wrote:
> <snip>
Hi KJ,

1. My design never uses a modular design methodology. I use one big file to contain all the logic statements except modules from the Xilinx core library. If a segment is to be used in another project, a simple copy and paste does the same thing the module methodology does, but all signal names stay unchanged across all functional blocks.

2. An individual read fifo is needed for each component. The reason is that issuing a read command and getting the data back are not synchronous, so each component must have its own read fifo to store its own read data. After the read data falls into its read fifo, each component can decide what to do next based on its own situation. If only one read buffer were used, big problems would arise. For example, with a PCI-X/PCI bus, if its module has read data, it cannot transfer that data until it gets control of the PCI-X/PCI bus. That process may last very long, for example 1K clocks, and other read data would be blocked by the single-read-buffer design.

3. Strategically, my method gives one great flexibility to do anything you want at the fastest speed and with minimum wire connections between the DDR controller and all components. Actually, in my design there is no arbiter, because there is no common bus to arbitrate. There is only write-fifo select logic to decide which write fifo should be picked first to write its data into the DDR chip, based on many factors, not only on whether a write fifo has data (a scoring sketch follows this post). The write factors include:
a. write priority;
b. whether the write address falls into the same bank+column as the current write command;
c. whether the write fifo is approaching full, depending on the source data input rate;
d. ...

4. Different components have different priorities of access to the DDR controller. You may imagine, for example, two PowerPCs, one PCI-e, one PCI-X, and one Gigabit stream. You might set up the priority table like this to handle read commands:
a. the two PowerPCs have top priority and equal rights to access the DDR;
b. PCI-e may be the lowest in priority, because it is a packet protocol and any delay does little damage to performance, if any;
c. ...

Weng
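A flavor of that multi-factor selection, as a scoring sketch; the factor inputs, weights, and names are invented for illustration and are not Weng's logic:

library ieee;
use ieee.std_logic_1164.all;

entity wr_pick is
  generic (N : positive := 5);
  port (
    clk       : in  std_logic;
    has_data  : in  std_logic_vector(N-1 downto 0);
    bank_hit  : in  std_logic_vector(N-1 downto 0);  -- same bank+column as current write
    near_full : in  std_logic_vector(N-1 downto 0);
    prio      : in  std_logic_vector(N-1 downto 0);  -- '1' = high priority
    pick      : out natural range 0 to N-1;
    pick_ok   : out std_logic
  );
end entity;

architecture rtl of wr_pick is
begin
  process (clk)
    variable best_score, score : integer;
    variable best_i            : natural range 0 to N-1;
    variable found             : boolean;
  begin
    if rising_edge(clk) then
      best_score := -1;
      best_i     := 0;
      found      := false;
      for i in 0 to N-1 loop
        if has_data(i) = '1' then
          -- weights are arbitrary: near-full fifos win first so no
          -- source stream ever stalls, then priority, then bank hits
          score := 0;
          if near_full(i) = '1' then score := score + 4; end if;
          if prio(i)      = '1' then score := score + 2; end if;
          if bank_hit(i)  = '1' then score := score + 1; end if;
          if score > best_score then
            best_score := score;
            best_i     := i;
            found      := true;
          end if;
        end if;
      end loop;
      pick <= best_i;
      if found then
        pick_ok <= '1';
      else
        pick_ok <= '0';
      end if;
    end if;
  end process;
end architecture;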
Weng Tianxiang wrote:
> <snip>
Hi KJ,

If you like, please post your module interface to the group and I will indicate which wires would be redundant if my design were implemented.

"In my case, I instantiated three arbitrators that connected to three separate DDRs (two with six masters, one with 12) and a fourth arbitrator that connected 13 bus masters to a single PCI bus."

What you did is expand the PCI bus arbiter idea to the DDR input bus. In my design the DDR needs no bus arbiter at all. The components connected to a DDR controller have no common bus to share, and they provide better performance than yours. So from this point of view, my DDR controller interface has nothing in common with yours. Both work, but with different strategies. My strategy is more complex than yours, but with the best performance. It saves a middle write fifo for the DDR controller: the DDR controller has no special write fifo of its own, it uses all the component write fifos as its write fifo, saving clocks and memory space and getting the best performance from the DDR controller.

Weng
Weng Tianxiang wrote:
> <big cut>
> 1. My design never uses a modular design methodology. I use one big
> file to contain all the logic statements except modules from the
> Xilinx core library.
>
> If a segment is to be used in another project, a simple copy and paste
> does the same thing the module methodology does, but all signal names
> stay unchanged across all functional blocks.
This is an interesting point. I just finished "VHDL for Logic Synthesis" by Andrew Rushton, a book recommended in an earlier post a few weeks ago, so I bought a copy. Rushton goes to great pains to say, multiple times:

"The natural form of hierarchy in VHDL, at least when it is used for RTL design, is the component. Do not be tempted to use subprograms as a form of hierarchical design! Any entity/architecture pair can be used as a component in a higher level architecture. Thus, complex circuits can be built up in stages from lower level components."

I was convinced by his arguments + examples. I'd think having a modular component approach wouldn't harm you, because during synthesis redundant interfaces + wires + logic would likely get optimized away. So the overriding factor is choosing what is easiest to implement, understand, maintain, share, etc. I.e. human factors.

Having said that, as a 'c' programmer I almost never create libraries. I have source code that does what I want, for a specific task. Later, if I have to do something similar, I go look at what I've already done and copy sections of code out as needed. A perfect example is the Berkeley sockets layer; the library calls are so obscure that all you want to do is cut and paste something you managed to get working before, to do the same thing again... The alternative would be to wrap the sockets interface in something else, supposedly simpler. But then it wouldn't have all the functionality...
-Dave
--
David Ashley
http://www.xdr.com/dash
Embedded linux, device drivers, system architecture
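To make Rushton's point concrete, here is a trivial entity/architecture pair reused as a component in a higher-level architecture (my own toy example, not from the book):

library ieee;
use ieee.std_logic_1164.all;

entity bit_reg is
  port (clk, d : in std_logic; q : out std_logic);
end entity;

architecture rtl of bit_reg is
begin
  process (clk)
  begin
    if rising_edge(clk) then
      q <= d;
    end if;
  end process;
end architecture;

library ieee;
use ieee.std_logic_1164.all;

entity two_stage is
  port (clk, d : in std_logic; q : out std_logic);
end entity;

architecture structural of two_stage is
  signal mid : std_logic;
begin
  -- the lower-level entity/architecture pair used twice as a component
  s1 : entity work.bit_reg port map (clk => clk, d => d,   q => mid);
  s2 : entity work.bit_reg port map (clk => clk, d => mid, q => q);
end architecture;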