
PCI configuration for ML310

Started by igel...@gmail.com February 27, 2006
Hello All,
 I'm trying to get Linux working on the Xilinx ML310 using a PCI
hardware configuration other than the one provided by Xilinx. I have a
base configuration which I created using EDK 7.1sp1.

 I've been able to compile a 2.4.30-pre1 linuxppc kernel, after making
small modifications to the source because some values (like the device
ID of the OPB-PCI bridge) were hardcoded into it. Right now the kernel
works more or less (there are sporadic kernel panics) as long as I use
no PCI devices other than the IDE controller. To get the IDE controller
working I had to use interrupt 31. In the hardware configuration this
interrupt is connected to the PCI-SBR pin (by the way, what is this pin
for?).

 If I include another PCI device in the kernel configuration, the
driver is able to read the registers and memory of the device. The
problem is with the interrupts: it seems that only the "SBR
interrupt" is being received. At this point I have some questions
about how interrupts are delivered from the PCI devices to the
interrupt controller.

 Looking at the system.vhd I see that the SBR pin is connected to
interrupt 0, the PCI<n>_INT lines are connected to interrupts 1 to 6,
and the IP2INTC line is connected to interrupt 12. The problem is that
I don't know which line is actually used to deliver the interrupts
coming from the PCI devices. I've tried using all of them, but all
attempts failed.

 In the PCI base configuration provided by Xilinx I see that they use
an IP core called misc_logic, which seems to merge all the interrupt
lines (some of them inverted) into a single one, but I do not
understand why I should do that. Could anybody explain the actual
reason?

 It would be great if somebody could explain how the whole interrupt
path from the PCI devices works. At the very least I would like to
know which interrupt line is supposed to be asserted when the ethernet
card, the USB bridge (or whatever PCI device you prefer) raises an
interrupt, so that I can then look into the Linux kernel code to fix
the problem, knowing which interrupt line is being asserted.

Thanks in advance and best regards,
 Isaac

Hi Isaac,

igelado@gmail.com wrote:

> In the PCI base configuration provided by Xilinx I see that they use
> an IP core called misc_logic, which seems to merge all the interrupt
> lines (some of them inverted) into a single one, but I do not
> understand why I should do that. Could anybody explain the actual
> reason?
>
> It would be great if somebody could explain how the whole interrupt
> path from the PCI devices works. At the very least I would like to
> know which interrupt line is supposed to be asserted when the
> ethernet card, the USB bridge (or whatever PCI device you prefer)
> raises an interrupt, so that I can then look into the Linux kernel
> code to fix the problem, knowing which interrupt line is being
> asserted.
As you are seeing, the approach is to merge all of the PCI interrupt
lines into a single interrupt signal, which then feeds into the OPB
interrupt controller (OPB_INTC). All device drivers request the same
IRQ line (whichever input it is on the OPB_INTC).

When any PCI device raises an interrupt, the merged signal is asserted
and the kernel iterates through a linked list of the handlers
registered on that one line. Each driver's IRQ handler queries its
device to see whether it is responsible for the interrupt condition;
if so, it does its thing. This process is documented in Chapter 4 of
Bovet and Cesati's excellent "Understanding the Linux Kernel".

Doing it this way makes life a bit simpler on the kernel side: knowing
that all PCI devices appear on the same IRQ line is convenient, at the
cost of a modest increase in interrupt latency.

Regards,
John
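(For reference, a minimal sketch of the shared-IRQ pattern described
above, in 2.4-kernel style. MY_PCI_IRQ, struct my_dev and the 0x04
status register offset are placeholders for illustration, not anything
from the ML310 BSP:)

#include <linux/sched.h>
#include <linux/interrupt.h>
#include <asm/io.h>

#define MY_PCI_IRQ  0   /* placeholder: OPB_INTC input driven by the merged PCI line */

struct my_dev {
    unsigned long regs;     /* ioremap()ed device registers */
};

/* Every driver sharing the line is called on each interrupt and must
 * check its own device to see whether it actually raised it. */
static void my_isr(int irq, void *dev_id, struct pt_regs *regs)
{
    struct my_dev *dev = dev_id;
    u32 status = readl(dev->regs + 0x04);   /* placeholder status register */

    if (!(status & 0x1))
        return;             /* not ours; another device on the shared line */

    /* acknowledge and service the interrupt here */
    writel(0x1, dev->regs + 0x04);
}

static int my_init(struct my_dev *dev)
{
    /* SA_SHIRQ marks the line as shareable; dev_id must be unique per
     * registration so free_irq() and the handler can tell them apart. */
    return request_irq(MY_PCI_IRQ, my_isr, SA_SHIRQ, "mydev", dev);
}

The key point is that every driver registered on the merged line must
be able to tell quickly, from its own device, whether the interrupt
belongs to it.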
Hi,
 I made a hardware module which takes as input all the PCI interrupt
lines and merges them into a single one. The merge is done in the same
way as in the misc_logic module of the Xilinx base configuration, that
is, all inputs are inverted except the SBR one.

 When using this hardware configuration the kernel is unable to boot;
I think it is because the interrupt line is always asserted. Does
anybody know the correct way of merging the interrupt signals?

Regards,
 Isaac

Isaac,

igelado@gmail.com wrote:

> I made a hardware module which takes as input all the PCI interrupt
> lines and merges them into a single one. The merge is done in the
> same way as in the misc_logic module of the Xilinx base
> configuration, that is, all inputs are inverted except the SBR one.
>
> When using this hardware configuration the kernel is unable to boot;
> I think it is because the interrupt line is always asserted. Does
> anybody know the correct way of merging the interrupt signals?
I assume you just copied the fragment from the misc_logic core that
does the interrupt signal merging? Is the merged signal still
connected to the same port on the interrupt controller?

John
No, I did not, because the misc_logic is written in Verilog and I have
no idea about Verilog, so I "ported" that piece of code to VHDL. The
code in Verilog is as follows:

always @(posedge clk)
 pci_int_or <= (~pci_inta) | (~pci_intb) |  (~pci_intc)  |  (~pci_intd)
| (~pci_inte) | (~pci_intf) | (~pci_core_intr_a) | (sbr_int);

Which I wrote in VHDL as:

process(clk)
begin
  if (clk'event and clk = '1') then
    pci_merge <= pci_sbr or (not pci_inta) or (not pci_intb) or
                 (not pci_intc) or (not pci_intd) or (not pci_inte) or
                 (not pci_intf) or (not pci_core_intr_a);
  end if;
end process;

The pci_merge signal is connected to interrupt pin zero, which is the
same one used in the base configuration provided by Xilinx. I have
also modified the xparameters_ml300.h generated by the EDK so that the
Linux kernel uses that interrupt for all PCI devices.
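
(For what it is worth, the kernel-side change amounts to forcing every
PCI device onto the merged IRQ. A hedged sketch in 2.4 style follows;
MERGED_PCI_IRQ is a placeholder for whatever value ends up in
xparameters_ml300.h, and where such a fixup hook actually lives in the
ml300/ml310 port should be checked against arch/ppc/platforms:)

#include <linux/init.h>
#include <linux/pci.h>

#define MERGED_PCI_IRQ  0   /* placeholder: OPB_INTC input driven by pci_merge */

/* Force every PCI device onto the single merged interrupt line, so
 * all drivers end up sharing the same IRQ (2.4-style board fixup). */
static void __init ml310_pcibios_fixup(void)
{
    struct pci_dev *dev;

    pci_for_each_dev(dev) {
        dev->irq = MERGED_PCI_IRQ;
        pci_write_config_byte(dev, PCI_INTERRUPT_LINE, dev->irq);
    }
}

Whether the existing ml300 setup code already does something
equivalent through the xparameters defines is worth checking before
adding another fixup.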

I think both pieces of code do the same thing, don't they? However,
the base configuration from Xilinx uses a really old opb2pci bridge
(if I try using it I get an error because it is deprecated) and I'm
using the latest one. Both cores have the same interrupt lines, but I
do not know whether they are compatible.

My first guess was that maybe in the current core the pci_int<a..f>
lines are active high, so instead of inverting them before ORing I
left them uninverted. Unfortunately the result was the same :(. Right
now I am merging all the signals as shown in the previous VHDL code,
but without pci_inte, pci_intf and pci_core_intr_a, which I think are
not used by my hardware configuration. With this scheme only the
PCI-IDE controller is getting interrupts.

I will also try including pci_core_intr_a in the merge logic. But, as
you can guess, I am trying different configurations without really
knowing what I am doing.

Isaac

Hi Isaac,

igelado@gmail.com wrote:
> No, I did not, because the misc_logic is written in Verilog and I have > no idea about Verilog, so I "ported" that piece of code to VHDL. The > code in Verilog is as follows: > > always @(posedge clk) > pci_int_or <= (~pci_inta) | (~pci_intb) | (~pci_intc) | (~pci_intd) > | (~pci_inte) | (~pci_intf) | (~pci_core_intr_a) | (sbr_int); > > Which I wrote in VHDL as: > process(clk) > if(clk'event and clk='1') then > pci_merge <= pci_sbr or not pci_inta or not pci_intb or not pci_inc > or not pci_ind or not pci_inte or not pci_intf ot not pci_core_intr_a; > end if; > end process;
I'm not much of a Verilog guy either, but your translation looks OK to
me.
> I think both pieces of code do the same thing, don't they? However,
> the base configuration from Xilinx uses a really old opb2pci bridge
> (if I try using it I get an error because it is deprecated) and I'm
> using the latest one. Both cores have the same interrupt lines, but
> I do not know whether they are compatible.
>
> My first guess was that maybe in the current core the pci_int<a..f>
> lines are active high, so instead of inverting them before ORing I
> left them uninverted. Unfortunately the result was the same :(.
> Right now I am merging all the signals as shown in the previous VHDL
> code, but without pci_inte, pci_intf and pci_core_intr_a, which I
> think are not used by my hardware configuration. With this scheme
> only the PCI-IDE controller is getting interrupts.
I'm not sure what's going on here. Looking back at your first post on
this topic, you mentioned that in Xilinx's design there were multiple
interrupt signals connected: "Looking at the system.vhd I see that the
SBR pin is connected to interrupt 0, the PCI<n>_INT lines are
connected to interrupts 1 to 6, and the IP2INTC line is connected to
interrupt 12. The problem is that I don't know which line is actually
used to deliver the interrupts coming from the PCI devices. I've tried
using all of them, but all attempts failed." That contradicts the
merged-interrupt approach we have discussed since. Maybe there's a
clue in there somewhere?

Sorry I can't be more help; this stuff is fiddly and probably won't
work at all until you get it exactly right. ChipScope might help, to
at least see what's going on inside with the various interrupt
signals. Also, the time-honoured tradition of peppering your kernel
with printk() calls should not be overlooked. Sometimes that's easier
than trying to debug/single-step the thing, particularly where
interrupts are involved. "Understanding the Linux Kernel" and "Linux
Device Drivers" will give you pointers on where in the source you
should be looking.

Regards,
John
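
(To make the printk() suggestion concrete: one low-tech check is a
throwaway handler registered on the merged line, so the console shows
whether that line fires at all when a non-IDE device should be
interrupting. A sketch with placeholder names, assuming the shared
SA_SHIRQ registration shown earlier in the thread:)

#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/sched.h>
#include <linux/interrupt.h>

#define MERGED_PCI_IRQ  0   /* placeholder: the merged PCI interrupt number */

/* Temporary debug handler: confirms whether the merged line fires. */
static void pci_irq_probe_isr(int irq, void *dev_id, struct pt_regs *regs)
{
    static unsigned long count;

    if (count++ < 20)   /* crude rate limit so the console is not flooded */
        printk(KERN_DEBUG "merged PCI irq %d fired, count=%lu\n",
               irq, count);
}

static int __init pci_irq_probe_init(void)
{
    /* dev_id just has to be a unique, non-NULL token for a shared line */
    return request_irq(MERGED_PCI_IRQ, pci_irq_probe_isr, SA_SHIRQ,
                       "pci-irq-probe", &pci_irq_probe_isr);
}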