Reply by MikeWhy June 25, 2008
"ertw" <gill81@hotmail.com> wrote in message 
news:0a1b0830-7fc9-466c-8c22-8a5c9555c1b3@m3g2000hsc.googlegroups.com...
One more question regarding the FIFO inside the FPGA. I am planning to
use two 12-bit ADCs (4 differential inputs in total) sampling at 40 MHz.
That means I will have 120 KBytes (960 Kbits) of data per frame to
store before I transfer it over the bus at a low rate. It seems like a
FIFO is the best way to buffer this data and transfer it with a slower
clock later, or maybe even four separate FIFOs, one per channel (240
Kbits each).

The Xilinx Spartan-3 XC3S4000 has a total of 1,728 Kbits of block RAM
(4 RAM columns, 24 RAM blocks per column, and 18,432 bits per block
RAM), enough for 960 Kbits per frame, that I guess I can use for FIFOs?

I would like a FIFO of size (20K x 12 bits) = 240 Kbits. Can I just
instantiate that using CoreGenerator?

Not sure if I understand all this right ... the XC3S4000 is a little
bigger than what I need to do in terms of logic ... but then again it
seems like the only one with enough block RAM for the FIFOs, unless I
am misunderstanding something. Please advise ...

==========
Yikes. The 4000 is a bit big if that's all you're doing. I guess this is the 
fun part of the job, and I wouldn't dream of depriving you of it. :) The 
choices are to slow it down, store it off chip, or suck it up and get the 
big chip for its block RAM. I like using an overly large chip least, but 
only you know the constraints of why so fast and what's possible. 3 ns DDR2 
SDRAM is pretty cheap these days ($10 in single quantity for 16M x 16 bits).

So, I take it the device doesn't exist yet?
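As a quick sanity check on the block-RAM arithmetic quoted above, here is a minimal sketch in Python; the frame geometry and device figures are the ones given in the thread:

# Does one 4-channel, 12-bit frame fit in the XC3S4000's block RAM?
pixels_per_frame = 320 * 256                         # 81,920 pixels
channels = 4                                         # one FIFO per output
samples_per_channel = pixels_per_frame // channels   # 20,480 (the "20K")
bits_per_sample = 12

bits_per_channel = samples_per_channel * bits_per_sample  # 245,760 = 240 Kbit
bits_per_frame = bits_per_channel * channels              # 983,040 = 960 Kbit

bram_bits = 4 * 24 * 18432           # 4 columns x 24 blocks = 1,728 Kbit
print(bits_per_frame / bram_bits)    # ~0.56: one frame uses just over half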


Reply by ertw June 25, 2008
On Jun 24, 2:01 pm, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
> MikeWhy wrote:
>> (snip on aliasing and imaging)
>>> http://www.nikonians.org/nikon/d200/nikon_d200_review_2.html#aa95cf09
>> Sure, I've clicked the shutter a few times. I was even around when Sigma
>> splatted in the market with the Foveon sensor. All the same, Bayer
>> aliasing isn't related to Nyquist aliasing and sampling frequency. The
>> OP needn't concern himself with Nyquist considerations. Yes?
>
> Bayer aliasing and sampling (spatial) frequency are exactly related
> to Nyquist aliasing, however the OP was asking about the time domain
> signal coming out of a CCD array. That signal has already been
> sampled and Nyquist should not be a consideration. (Unless one is
> sampling the CCD output at a lower frequency.) The OP didn't explain
> the optical system at all, so I can't say if that is a concern
> or not.
>
> -- glen
Thanks guys, all the suggestions and explanations have been very helpful! One more question regarding the FIFO inside the FPGA. I am planning to use two 12-bit ADCs (4 differential inputs in total) sampling at 40 MHz. That means I will have 120 KBytes (960 Kbits) of data per frame to store before I transfer it over the bus at a low rate. It seems like a FIFO is the best way to buffer this data and transfer it with a slower clock later, or maybe even four separate FIFOs, one per channel (240 Kbits each).

The Xilinx Spartan-3 XC3S4000 has a total of 1,728 Kbits of block RAM (4 RAM columns, 24 RAM blocks per column, and 18,432 bits per block RAM), enough for 960 Kbits per frame, that I guess I can use for FIFOs? I would like a FIFO of size (20K x 12 bits) = 240 Kbits. Can I just instantiate that using CoreGenerator?

Not sure if I understand all this right ... the XC3S4000 is a little bigger than what I need to do in terms of logic ... but then again it seems like the only one with enough block RAM for the FIFOs, unless I am misunderstanding something. Please advise ... Thanks!
Reply by glen herrmannsfeldt June 24, 2008
MikeWhy wrote:
(snip on aliasing and imaging)

>> http://www.nikonians.org/nikon/d200/nikon_d200_review_2.html#aa95cf09
> Sure, I've clicked the shutter a few times. I was even around when Sigma
> splatted in the market with the Foveon sensor. All the same, Bayer
> aliasing isn't related to Nyquist aliasing and sampling frequency. The
> OP needn't concern himself with Nyquist considerations. Yes?
Bayer aliasing and sampling (spatial) frequency are exactly related to Nyquist aliasing; however, the OP was asking about the time domain signal coming out of a CCD array. That signal has already been sampled and Nyquist should not be a consideration. (Unless one is sampling the CCD output at a lower frequency.) The OP didn't explain the optical system at all, so I can't say if that is a concern or not.

-- glen
Reply by PFC June 24, 2008
> The output is actually 4 differential signals (one for each column)
> meaning I will need four ADCs (all four video output signals come out
> simultaneously). The resolution that I want is 16 bits.
Sure, you want 16 bits, but what is the signal-to-noise ratio that the sensor actually delivers? If your sensor only has, say, 10 bits of precision, then your expensive 16-bit ADCs are wasted.
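For reference, the usual rule of thumb tying SNR to usable bits (derived for a full-scale sine input) is ENOB = (SNR_dB - 1.76) / 6.02. A minimal sketch, with the example SNR figures made up:

# Effective number of bits (ENOB) delivered by a given signal-chain SNR.
def enob(snr_db):
    return (snr_db - 1.76) / 6.02

print(enob(62.0))    # ~10 bits: a 16-bit ADC buys little here
print(enob(98.1))    # ~16 bits: the SNR a true 16-bit result demands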
> Now, that means I have four parallel channels of 16 bits coming into
> the FPGA every 25 ns that I need to store somewhere. The total data
> per frame is:
> (320 x 256) x 16 bits = 1310720 bits/frame OR 163840 Bytes/frame or
> 160 KBytes/frame.
>
> Do you think I can store that much within a Xilinx FPGA? I am trying
> to do 30 frames per second which means I have roughly 33 ms per frame
> but using a 40 MHz clock each frame can be read out in 512 microseconds
> with a whole lot of dead time after each frame (unless I can run the
> sensor at a slower pixel clock).
Well, your pixel clock is going to depend on the bandwidth and settling time of the analog path. Your settling time requirements depend on the number of bits you actually want. If you want more precision it always takes longer to settle. Don't forget this in the design of your analog path.
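To put a number on that: for a single-pole analog path, settling to within 1/2 LSB of an N-bit result takes about (N+1)*ln(2) time constants, so each extra bit costs roughly another 0.7 tau. A minimal sketch of that relation:

# Time constants a single-pole path needs to settle to 1/2 LSB of N bits.
import math

def settling_time_constants(n_bits):
    # error must fall below 2^-(N+1), i.e. exp(-t/tau) < 2^-(N+1)
    return (n_bits + 1) * math.log(2)

print(settling_time_constants(10))   # ~7.6 tau
print(settling_time_constants(16))   # ~11.8 tau: more bits settle slower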
> The idea is to transfer data over the PCI bus to the computer and I
> can't go over 133 Meg transfers per second. Since I am reading 4
Actually it's 133 megabytes/s, but 33 megatransfers/s, since one transfer is one 32-bit word, i.e. 4 bytes.
> channels @ 40 MHz that works out to be 160 Mbits per second so not
> possible to transfer the data on the fly over the bus (unless I am
> misunderstanding something). Is there a way to transfer data on the
> fly over the PCI bus other than slowing the pixel clock?
You can either use a fast pixel clock and a large FIFO, or a slower pixel clock and no FIFO.

But if you only want 30 fps, which is quite small, and a small resolution of 320x240, this is only 2.3 million pixels per second. So you can use a pixel clock of (say) 1 MHz, which outputs 4 pixels every microsecond as you said, and then a 4-input muxed ADC instead of 4 ADCs (much cheaper), taking 4 million samples/s. In this case you are twice as fast as necessary, which will allow you to use up to half the frame time as exposure time on your sensor. If you need longer exposures you will need a faster pixel clock to allow more time for exposure and less time for data handling.

Now if you want to do high-speed recording (like 300 fps) to take videos of bullets exploding tomatoes, you'll need to use the fastest pixel clock you can get and also very powerful lights. But if you only need a slow 30 fps you don't need to use expensive analog parts and ADCs.
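Checking those rates with a quick sketch (using the 320x240 figure above; the OP's 320x256 sensor gives slightly less headroom but the same conclusion):

# 30 fps throughput vs. a 1 MHz pixel clock feeding one 4-input muxed ADC.
fps = 30
pixel_rate = 320 * 240 * fps      # 2,304,000 pixels/s (~2.3 M)

pixel_clock = 1_000_000           # 1 MHz; 4 outputs deliver 4 px per tick
adc_rate = pixel_clock * 4        # 4 Msamples/s through the muxed ADC

print(adc_rate / pixel_rate)      # ~1.7x headroom, roughly the 2x claimed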
> Or how can I efficiently transfer the data over the bus (even if
> I have to store and then use a slower clock to transfer the data out)?
To be efficient you need burst transfers, so you will always need some form of FIFO somewhere, and DMA.

Note that since your throughput is quite small you could use USB instead of PCI, which would allow more freedom in locating the camera further from the computer itself. What do you want to do with this camera?
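Once a FIFO and DMA smooth out the bursts, it is the sustained rate the bus must carry; a rough comparison, assuming the 160 KByte frames and 30 fps from earlier in the thread (the USB figure is a ballpark practical number, not a spec value):

# Sustained bandwidth after buffering vs. what the buses can carry.
frame_bytes = 160 * 1024                 # 160 KBytes/frame at 16 bits/pixel
avg_mb_s = frame_bytes * 30 / 1e6        # ~4.9 MB/s sustained at 30 fps

pci_peak_mb_s = 133                      # 32-bit/33 MHz PCI, theoretical
usb2_mb_s = 35                           # USB 2.0, rough real-world rate

print(avg_mb_s)                          # either bus has ample headroom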
Reply by Jonathan Bromley June 24, 2008
On Tue, 24 Jun 2008 02:11:47 -0500, "MikeWhy" wrote:

> All the same, Bayer aliasing
> isn't related to Nyquist aliasing and sampling frequency. The OP needn't
> concern himself with Nyquist considerations. Yes?
Obviously, from a technology and signal-processing point of view they live in different worlds. But I don't really see what's so different between thinking about spatial frequency and thinking about temporal frequency. But then, part of my problem is that I learned how to think about the frequency/time or spatial-frequency/distance duality not through engineering, but physics: if I want to understand what a convolution is doing, my first resort even today is to think about optical transforms.

Absolutely agreed that the OP probably has no control at all over the spatial bandwidth (MTF) and spatial sampling concerns of his image sensor/lens combination.

--
Jonathan Bromley, Consultant

DOULOS - Developing Design Know-how
VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services

Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK
jonathan.bromley@MYCOMPANY.com
http://www.MYCOMPANY.com

The contents of this message may contain personal views which are not the views of Doulos Ltd., unless specifically stated.
Reply by MikeWhy June 24, 2008
"glen herrmannsfeldt" <gah@ugcs.caltech.edu> wrote in message 
news:9Nednd9t4eMPDP3VnZ2dnUVZ_jmdnZ2d@comcast.com...
> MikeWhy wrote:
> (snip regarding Nyquist and image sensors)
>
>> Which do you mean? Two pixels is Nyquist critical. Half pixel aliasing is
>> a spatial resolution problem, not a spectral aliasing (Nyquist) issue.
>
> It isn't usually as bad as audio, but an image with a very
> high spatial frequency can alias on an image sensor.
> (Usually called Moire for images. Aliasing can also cause
> color effects based on the pattern of the color filters
> on the sensor.)
>
> http://www.nikonians.org/nikon/d200/nikon_d200_review_2.html#aa95cf09
Sure, I've clicked the shutter a few times. I was even around when Sigma splatted in the market with the Foveon sensor. All the same, Bayer aliasing isn't related to Nyquist aliasing and sampling frequency. The OP needn't concern himself with Nyquist considerations. Yes?
Reply by glen herrmannsfeldt June 24, 2008
MikeWhy wrote:
(snip regarding Nyquist and image sensors)

> Which do you mean? Two pixels is Nyquist critical. Half pixel aliasing
> is a spatial resolution problem, not a spectral aliasing (Nyquist) issue.
It isn't usually as bad as audio, but an image with a very high spatial frequency can alias on an image sensor. (Usually called Moire for images. Aliasing can also cause color effects based on the pattern of the color filters on the sensor.)

http://www.nikonians.org/nikon/d200/nikon_d200_review_2.html#aa95cf09

-- glen
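The spatial version of Nyquist being invoked here is easy to put numbers on; a small illustration (the pixel pitch is invented for the example):

# Spatial Nyquist limit: detail finer than this aliases into Moire.
pixel_pitch_um = 6.0                        # hypothetical pixel pitch
samples_per_mm = 1000.0 / pixel_pitch_um    # ~167 samples/mm across the die
nyquist_lp_per_mm = samples_per_mm / 2      # ~83 line pairs/mm

# The lens (or an optical low-pass filter) should roll off detail
# above nyquist_lp_per_mm before it reaches the sensor.
print(nyquist_lp_per_mm)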
Reply by MikeWhy June 23, 2008
"ertw" <gill81@hotmail.com> wrote in message 
news:812c4ac9-d1cf-4a1d-a66b-807aeb0c7359@m45g2000hsb.googlegroups.com...
Now, that means I have four parallel channels of 16 bits coming into
the FPGA every 25 ns that I need to store somewhere. The total data
per frame is:
(320 x 256) x 16 bits = 1310720 bits/frame OR 163840 Bytes/frame or
160 KBytes / frame.

Do you think I can store that much within a Xilinx FPGA? I am trying
to do 30 frames per second which means I have roughly 33 ms per frame
but using a 40 MHz clock each frame can be read out in 512 microseconds
with a whole lot of dead time after each frame (unless I can run the
sensor at a slower pixel clock).

=========
A block RAM FIFO comes to mind. Maybe even 4 of them, one for each column 
stream. Search the docs for BRAM.

The frames are small enough, and 33 ms is long enough, that you likely
won't need to double buffer (for example, staging frames in a larger,
slower memory to ride out bus contention).
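The timing behind that judgment checks out; a minimal sketch using the numbers quoted above:

# Readout burst vs. frame period: the slack available to drain the FIFOs.
pixels = 320 * 256
channels = 4
pixel_clock = 40e6                             # Hz, per channel

readout_s = pixels / channels / pixel_clock    # 512e-6 s = 512 microseconds
frame_s = 1 / 30                               # ~33.3 ms between frames

print(frame_s - readout_s)    # ~32.8 ms of dead time to unload one frame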


Reply by MikeWhy June 23, 2008
"glen herrmannsfeldt" <gah@ugcs.caltech.edu> wrote in message 
news:xcadnWI25-LcecLVnZ2dnUVZ_tTinZ2d@comcast.com...
> MikeWhy wrote:
> (snip)
>
>> Nyquist relates to sinusoids and periodicity in the signal. The sampling
>> period as it relates to Nyquist with your image sensor is the frame rate,
>> not the pixel clock/ADC sample rate. The two are not related in a
>> meaningful way. Fuhget about it.
>
> Yes, Nyquist is completely unrelated to the signal coming out
> of an image sensor, but it is important in what goes in.
>
> Specifically, the image sensor samples an analog image in
> two dimensions, and, for the result to be correct, the image itself
> must not have spatial frequencies at the sensor surface higher
> than half the pixel spacing. Sometimes one trusts the lens to
> do that; other times an optical low pass filter is used.
Which do you mean? Two pixels is Nyquist critical. Half pixel aliasing is a spatial resolution problem, not a spectral aliasing (Nyquist) issue.
Reply by ertw June 23, 2008
On Jun 22, 10:43 am, Jonathan Bromley <jonathan.brom...@MYCOMPANY.com>
wrote:
> On Sun, 22 Jun 2008 07:01:10 -0700 (PDT), ertw <gil...@hotmail.com>
> wrote:
>
>> Hi, I am planning to read an image sensor using an FPGA but I am a
>> little confused about a bunch of things. Hopefully someone here can
>> help me understand the following things:
>>
>> Note: The image sensor output is an ANALOG signal. Datasheet says that
>> the READOUT clock is 40MHz.
>
> It somewhat depends on whereabouts in the sensor's output
> signal processing chain you expect to pick up the signal.
> Is this a raw sensor chip that you have? Is it hiding
> behind a sensor drive/control chipset? Is it already
> packaged, supplying standard composite video output?
>
>> 1. How is reading of an image sensor using an ADC different than
>> reading a random analog signal using an ADC?
>
> You're right to question this. Of course, at base it isn't -
> it's just a matter of sampling an analog signal. But the image
> sensor has some slightly strange properties. First off, the
> analog signal has already been through some kind of sample-
> and-hold step. In an idealised world, with a 40 MHz readout
> clock, you would expect to see the analog signal "flat" for
> 25 ns while it delivers the sampled signal for one pixel,
> and then make a step change to a different voltage for the
> next pixel, which again would last for 25 ns, and so on.
>
> In the real world, of course, it ain't that simple. First,
> you have the limited bandwidth of the analog signal processing
> chain (inside the image sensor and its support chips) which will
> cause this idealised stair-step waveform to have all manner of
> non-ideal characteristics. Indeed, if the output signal is
> designed for use as an analog composite video signal, then
> it will probably have been through a low-pass filter to remove
> most of the staircase-like behaviour. Second, even before the
> analog signal made it as far as the staircase waveform I
> described, there will be a lot of business about sampling and
> resetting the image sensor's output structures.
>
> In summary, all of this stuff says that you should take care
> to sample the analog signal exactly when the camera
> manufacturer tells you to sample it, with the 40 MHz sample
> clock that they've so thoughtfully provided (I hope!).
>
>> And the amount of data or memory required can be calculated
>> using: Sampling rate x ADC resolution
>> - This is different in case of an image sensor
>
> Of course it is not different. If you get 16 bits, 40M times
> per second, then you have 640 Mbit/sec to handle.
>
>> Do I use an ADC running at 40 MSamples/second since the
>> pixel output is 40 MHz?
>
> If the camera manufacturer gives you a "sampled analog"
> output and a sampling clock, then yes. On the other hand,
> if all you have is a composite analog video output with
> no sampling clock, you are entirely free to choose your
> sampling rate - bearing in mind that it may not match
> up with pixels on the camera, and therefore you are
> trusting the camera's low-pass filter to do a good job
> of the interpolation for you.
>
>> How do I calculate the required memory?
>> Is it simply 40 MS/s x 16 bits (adc resolution) for each pixel
>
> eh?
>
>> or just 16 bits per pixel?
>
> Only the very highest quality cameras give an output that's worth
> digitising to 16 bit precision. 10 bits should be enough for
> anyone; 8 bits is often adequate for low-spec applications such
> as webcams and surveillance.
>
>> If each frame is 320 x 256 then data per frame is (320x256) x
>> 16 bits, why not multiply this by 40 MS/s like
>> you would for any other random analog signal?
>
> I have no idea what you mean. 40 MHz is the *pixel* rate. Let's
> follow that through:
>
>   40 MHz, 320 pixels on a line - that's 8 microseconds per line.
>   But don't forget to add the extra 2 us or thereabouts that will
>   be needed for horizontal synch or whatever. Let's guess 10 us
>   per line.
>
>   256 lines per image, 10 us per line, that's 2.56 milliseconds per
>   image - but, again, we need to add a margin for frame synch.
>   Perhaps 3 ms per image.
>
>   Wow, you're getting 330 images per second - that's way fast.
>
> But whatever you do, if you sample your ADC at 40 MHz then you
> get 40 million samples per second!
>
> ~~~~~~~~~~~~~~~~~~~~~~~
>
> More questions:
>
> What about colour? Or is this a monochrome sensor?
>
> Do you get explicit frame and line synch signals from the
> camera, or must you extract them from the composite
> video signal?
>
> Must you create the camera's internal line, pixel and field
> clocks yourself in the FPGA, or does the camera already have
> clock generators in its support circuitry?
>
> ~~~~~~~~~~~~~~~~~~~~~~
>
> You youngsters have it so easy :-) The first CCD camera
> controller I did had about 60 MSI chips in it, an unholy
> mess of PALs, TTL, CMOS, special-purpose level shifters
> for the camera clocks (TSC426, anyone?), sample-and-hold
> and analog switch devices to capture the camera output,
> some wild high-speed video amplifiers (LM533)... And
> the imaging device itself, from Fairchild IIRC, was only
> NTSC-video resolution and cost around $300. Things have
> moved on a little in the last quarter-century...
> --
> Jonathan Bromley, Consultant
>
> DOULOS - Developing Design Know-how
> VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services
>
> Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK
> jonathan.brom...@MYCOMPANY.com
> http://www.MYCOMPANY.com
>
> The contents of this message may contain personal views which
> are not the views of Doulos Ltd., unless specifically stated.
Guys, thanks a lot for the help. Jonathan, your explanation was great ... Answers to the questions you asked:
- It's a monochrome sensor.
- I do get explicit frame and line signals from the sensor.
- The sensor does not have any clock generating circuitry (I have to provide the clock, or pixel clock, to the sensor; not sure if I was clear about that in the previous post).

I have a few more questions regarding data storage and processing (I think the readout from the sensor is a little clearer in my head now). The sensor is a packaged integrated circuit with processing applied to the final-stage analog signal (that's where I am planning to read it using an ADC). The output is actually 4 differential signals (one for each column), meaning I will need four ADCs (all four video output signals come out simultaneously). The resolution that I want is 16 bits.

Now, that means I have four parallel channels of 16 bits coming into the FPGA every 25 ns that I need to store somewhere. The total data per frame is: (320 x 256) x 16 bits = 1310720 bits/frame OR 163840 Bytes/frame or 160 KBytes/frame.

Do you think I can store that much within a Xilinx FPGA? I am trying to do 30 frames per second, which means I have roughly 33 ms per frame, but using a 40 MHz clock each frame can be read out in 512 microseconds with a whole lot of dead time after each frame (unless I can run the sensor at a slower pixel clock).

The idea is to transfer data over the PCI bus to the computer and I can't go over 133 Meg transfers per second. Since I am reading 4 channels @ 40 MHz that works out to be 160 Mbits per second, so it's not possible to transfer the data on the fly over the bus (unless I am misunderstanding something). Is there a way to transfer data on the fly over the PCI bus other than slowing the pixel clock? Or how can I efficiently transfer the data over the bus (even if I have to store and then use a slower clock to transfer the data out)?
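A quick check of the rates in that last question (note that 4 channels at 40 MHz is 160 Msamples/s; at 16 bits per sample the instantaneous bit rate is far higher, which is what rules out streaming directly). A minimal sketch:

# Can four 16-bit channels at 40 MHz stream over PCI on the fly?
channels, bytes_per_sample = 4, 2
pci_peak = 133e6                               # bytes/s, 32-bit/33 MHz PCI

burst = channels * bytes_per_sample * 40e6     # 320e6 bytes/s: no, buffer it

# Fastest pixel clock PCI could, in theory, absorb with no FIFO at all:
max_pixel_clock = pci_peak / (channels * bytes_per_sample)
print(burst, max_pixel_clock)                  # ~16.6 MHz, before overheads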