FPGARelated.com
Forums

Image Sensor Interface.

Started by ertw June 22, 2008
MikeWhy wrote:
(snip regarding Nyquist and image sensors)

> Which do you mean? Two pixels is Nyquist critical. Half pixel aliasing
> is a spatial resolution problem, not a spectral aliasing (Nyquist) issue.
It isn't usually as bad as audio, but an image with a very high spatial
frequency can alias on an image sensor. (Usually called Moire for images.
Aliasing can also cause color effects based on the pattern of the color
filters on the sensor.)

http://www.nikonians.org/nikon/d200/nikon_d200_review_2.html#aa95cf09

-- glen
"glen herrmannsfeldt" <gah@ugcs.caltech.edu> wrote in message 
news:9Nednd9t4eMPDP3VnZ2dnUVZ_jmdnZ2d@comcast.com...
> MikeWhy wrote:
> (snip regarding Nyquist and image sensors)
>
>> Which do you mean? Two pixels is Nyquist critical. Half pixel aliasing is
>> a spatial resolution problem, not a spectral aliasing (Nyquist) issue.
>
> It isn't usually as bad as audio, but an image with a very
> high spatial frequency can alias on an image sensor.
> (Usually called Moire for images. Aliasing can also cause
> color effects based on the pattern of the color filters
> on the sensor.)
>
> http://www.nikonians.org/nikon/d200/nikon_d200_review_2.html#aa95cf09
Sure, I've clicked the shutter a few times. I was even around when Sigma splatted in the market with the Foveon sensor. All the same, Bayer aliasing isn't related to Nyquist aliasing and sampling frequency. The OP needn't concern himself with Nyquist considerations. Yes?
On Tue, 24 Jun 2008 02:11:47 -0500, "MikeWhy" wrote:

> All the same, Bayer aliasing isn't related to Nyquist aliasing and
> sampling frequency. The OP needn't concern himself with Nyquist
> considerations. Yes?
Obviously, from a technology and signal-processing point of view they live
in different worlds. But I don't really see what's so different between
thinking about spatial frequency and thinking about temporal frequency.

But then, part of my problem is that I learned how to think about the
frequency/time or spatial-frequency/distance duality not through
engineering, but physics: if I want to understand what a convolution is
doing, my first resort even today is to think about optical transforms.

Absolutely agreed that the OP probably has no control at all over the
spatial bandwidth (MTF) and spatial sampling concerns of his image
sensor/lens combination.

--
Jonathan Bromley, Consultant
DOULOS - Developing Design Know-how
VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services
Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK
jonathan.bromley@MYCOMPANY.com
http://www.MYCOMPANY.com

The contents of this message may contain personal views which are not the
views of Doulos Ltd., unless specifically stated.
> The output is actually 4 differential signals (one for each column)
> meaning I will need four ADCs (all four video output signals come out
> simultaneously). The resolution that I want is 16 bits.
Sure, you want 16 bits, but what is the signal-to-noise ratio that the
sensor actually delivers? If your sensor only has, say, 10 bits of
precision, then your expensive 16-bit ADCs are wasted.
> Now, that means I have four parallel channels of 16 bits coming into
> the FPGA every 25 ns that I need to store somewhere. The total data
> per frame is:
> (320 x 256) x 16 bits = 1310720 bits/frame OR 163840 Bytes/frame or
> 160 KBytes / frame.
>
> Do you think I can store that much within a Xilinx FPGA. I am trying
> to do 30 frames per second which means I have roughly 33 ms per frame
> but using a 40 MHz clock each frame can be read out in 512 microseconds
> with a whole lot of dead time after each frame (unless I can run the
> sensor at a slower pixel clock).
Well, your pixel clock is going to depend on the bandwidth and settling
time of the analog path. Your settling time requirements depend on the
number of bits you actually want. If you want more precision it always
takes longer to settle. Don't forget this in the design of your analog
path.
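For reference, the frame-size and readout-time figures in the quoted post can be checked with a few lines of arithmetic. A quick sketch in Python, using the OP's stated numbers (320 x 256 pixels, 16 bits per pixel, 4 channels at a 40 MHz pixel clock):

```python
# Check the OP's frame-buffer arithmetic: 320 x 256 pixels, 16 bits per
# pixel, read out over 4 parallel channels at a 40 MHz pixel clock.
cols, rows = 320, 256
bits_per_pixel = 16
channels = 4
pixel_clock_hz = 40e6

bits_per_frame = cols * rows * bits_per_pixel    # 1,310,720 bits/frame
bytes_per_frame = bits_per_frame // 8            # 163,840 bytes = 160 KB

# 4 pixels arrive per 25 ns clock period, so one frame reads out in:
clocks_per_frame = cols * rows // channels       # 20,480 clocks
readout_s = clocks_per_frame / pixel_clock_hz    # 512 microseconds

print(bits_per_frame, bytes_per_frame, readout_s)
```

Both of the OP's figures (160 KB per frame, 512 us readout) check out, which is why there is so much dead time inside each 33 ms frame period.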
> The idea is to transfer data over the pci bus to the computer and I
> cant go over 133 Meg transfers per second. Since I am reading 4

Actually it's 133 megabytes/s, but 33 megatransfers/s, since one transfer
is one 32-bit word, i.e. 4 bytes.

> channels @ 40 MHz that works out to be 160 Mbits per second so not
> possible to transfer the data on fly over the bus (unless I am
> misunderstanding something). Is there a way to transfer data on the
> fly over the pci bus other than slowing the pixel clock ?
You can either use a fast pixel clock and a large FIFO, or a slower pixel
clock and no FIFO.

But if you only want 30 fps, which is quite small, and a small resolution
of 320x240, this is only 2.3 million pixels per second. So you can use a
pixel clock of (say) 1 MHz, which outputs 4 pixels every microsecond as
you said, and then a 4-input muxed ADC instead of 4 ADCs (much cheaper),
taking 4 million samples/s. In this case you are about twice as fast as
necessary, which will allow you to use up to half the frame time as
exposure time on your sensor. If you need longer exposures you will need
a faster pixel clock to allow more time for exposure and less time for
data handling.

Now if you want to make high-speed recordings (like 300 fps) to take
videos of bullets exploding tomatoes, you'll need to use the fastest
pixel clock you can get and also very powerful lights. But if you only
need a slow 30 fps you don't need to use expensive analog parts and ADCs.
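The slow-clock option above can be sanity-checked numerically. A sketch using the poster's example values (320x240 at 30 fps, 1 MHz pixel clock, one 4-input multiplexed ADC):

```python
# Throughput for the muxed-ADC option: 320 x 240 pixels at 30 fps
# through a single 4-input multiplexed ADC at a 1 MHz pixel clock.
cols, rows, fps = 320, 240, 30
pixels_per_second = cols * rows * fps                  # 2,304,000 pixels/s
channels = 4
pixel_clock_hz = 1e6                                   # 4 pixels per 1 us tick
adc_samples_per_second = channels * pixel_clock_hz     # 4 MS/s
headroom = adc_samples_per_second / pixels_per_second  # ~1.7x margin
print(pixels_per_second, adc_samples_per_second, round(headroom, 2))
```

Strictly the margin works out to about 1.7x rather than exactly 2x, but the conclusion stands: the slow clock leaves most of each frame period free for exposure.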
> Or how can I efficiently transfer the data over the bus (even if
> I have to store it and then use a slower clock to transfer the data
> out)?
To be efficient you need burst transfers, so you will always need some
form of FIFO somewhere, and DMA.

Note that since your throughput is quite small you could use USB instead
of PCI, which would allow more freedom in locating the camera further
from the computer itself.

What do you want to do with this camera ?
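One way to size that FIFO: if a frame bursts in at the full pixel rate while the bus drains it continuously, the worst-case occupancy is (write rate - read rate) x burst length. A back-of-the-envelope sketch using the OP's 4-channel / 16-bit / 40 MHz numbers against PCI's 133 MB/s peak (the pairing of these rates is my assumption, not the poster's):

```python
# Worst-case FIFO occupancy for burst-in / drain-out buffering.
frame_bytes = 320 * 256 * 2          # 163,840 B/frame at 16 bits/pixel
write_rate  = 4 * 2 * 40e6           # 4 channels x 2 B @ 40 MHz = 320 MB/s
read_rate   = 133e6                  # PCI peak: 32-bit words @ 33 MHz
burst_s     = frame_bytes / write_rate             # 512 us per frame
fifo_bytes  = (write_rate - read_rate) * burst_s   # ~96 KB worst case
print(burst_s, round(fifo_bytes))
```

In practice sustained PCI throughput is well below the 133 MB/s peak, so the real requirement is larger; but with 33 ms between frames the FIFO has ample time to drain fully before the next burst arrives.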
MikeWhy wrote:
(snip on aliasing and imaging)

>> http://www.nikonians.org/nikon/d200/nikon_d200_review_2.html#aa95cf09
> Sure, I've clicked the shutter a few times. I was even around when Sigma
> splatted in the market with the Foveon sensor. All the same, Bayer
> aliasing isn't related to Nyquist aliasing and sampling frequency. The
> OP needn't concern himself with Nyquist considerations. Yes?
Bayer aliasing and sampling (spatial) frequency are exactly related to
Nyquist aliasing; however, the OP was asking about the time domain signal
coming out of a CCD array. That signal has already been sampled and
Nyquist should not be a consideration. (Unless one is sampling the CCD
output at a lower frequency.) The OP didn't explain the optical system at
all, so I can't say if that is a concern or not.

-- glen
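The folding being discussed is easy to demonstrate numerically: once a signal is sampled below its Nyquist rate, a high frequency becomes indistinguishable from a lower one. A minimal one-dimensional sketch (the same mathematics that produces Moire patterns spatially; the particular frequencies are arbitrary illustration values):

```python
import math

# A 7-cycle/unit sinusoid sampled at 5 samples/unit (Nyquist would
# require more than 14) produces exactly the same sample values as a
# 2-cycle/unit sinusoid: 7 folds down to |7 - 5| = 2.
fs, f = 5.0, 7.0
f_alias = abs(f - round(f / fs) * fs)   # aliased frequency: 2.0

high = [math.cos(2 * math.pi * f       * n / fs) for n in range(10)]
low  = [math.cos(2 * math.pi * f_alias * n / fs) for n in range(10)]
print(f_alias, all(abs(a - b) < 1e-9 for a, b in zip(high, low)))
```

This is why anti-alias filtering (optical or electrical) has to happen before sampling; afterwards the two signals are the same data.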
On Jun 24, 2:01 pm, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
> MikeWhy wrote:
> (snip on aliasing and imaging)
>
>> http://www.nikonians.org/nikon/d200/nikon_d200_review_2.html#aa95cf09
>
>> Sure, I've clicked the shutter a few times. I was even around when Sigma
>> splatted in the market with the Foveon sensor. All the same, Bayer
>> aliasing isn't related to Nyquist aliasing and sampling frequency. The
>> OP needn't concern himself with Nyquist considerations. Yes?
>
> Bayer aliasing and sampling (spatial) frequency are exactly related
> to Nyquist aliasing, however the OP was asking about the time domain
> signal coming out of a CCD array. That signal has already been
> sampled and Nyquist should not be a consideration. (Unless one is
> sampling the CCD output at a lower frequency.) The OP didn't explain
> the optical system at all, so I can't say if that is a concern
> or not.
>
> -- glen
Thanks guys, all the suggestions and explanations have been very helpful!

One more question regarding the FIFO inside the FPGA. I am planning to
use two 12-bit ADCs (4 diff inputs in total) sampling at 40 MHz. Now that
means I will have 120 KBytes (960 Kbits) of data per frame to store
before I transfer it over the bus at a low rate. It seems like a FIFO is
the best way to buffer this data and transfer it with a slower clock
later, or maybe even four different FIFOs, one for each channel (240
Kbits each).

The Xilinx Spartan-3 XC3S4000 has a total of 1,728 Kbits (enough for 960
Kbits per frame) of block RAM (4 RAM columns, 24 RAM blocks per column
and 18,432 bits per block RAM) that I guess I can use for FIFOs?

I would like a FIFO of size (20K x 12 bits) = 240 Kbits. Can I just
instantiate that using the CORE Generator?

Not sure if I understand all this right ... the XC3S4000 is a little
bigger than what I need in terms of logic ... but then again it seems
like the only one with enough block RAM for the FIFOs unless I am
misunderstanding something. Please advise ... Thanks!
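For what it's worth, the block-RAM budget in that post works out. A quick check using the figures quoted above (Kbit = 1024 bits here):

```python
import math

# Block-RAM budget on the Spartan-3 XC3S4000, per the figures above:
# 4 columns x 24 blocks x 18,432 bits = 1,728 Kbits total.
blocks_total = 4 * 24               # 96 block RAMs on the device
bits_per_blk = 18432
total_kbits  = blocks_total * bits_per_blk / 1024    # 1728.0 Kbits

fifo_bits   = 20 * 1024 * 12        # one 20K x 12 FIFO = 240 Kbits
all_fifos   = 4 * fifo_bits         # four channels = 960 Kbits
blocks_used = math.ceil(all_fifos / bits_per_blk)    # 54 of 96 blocks
print(total_kbits, blocks_used, blocks_total)
```

Note that if each channel's FIFO is generated separately, each one likely rounds up to whole 18-Kbit primitives on its own (14 per channel, 56 in total), so the real count is slightly higher than the pooled figure. Either way it fits in a bit over half the device's block RAM.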
"ertw" <gill81@hotmail.com> wrote in message 
news:0a1b0830-7fc9-466c-8c22-8a5c9555c1b3@m3g2000hsc.googlegroups.com...
> One more question regarding the FIFO inside the FPGA. I am planning to
> use two 12-bit ADCs (4 diff inputs in total) sampling at 40 MHz. Now
> that means I will have 120 KBytes (960 Kbits) of data per frame to
> store before I transfer it over the bus at a low rate. It seems like a
> FIFO is the best way to buffer this data and transfer it with a slower
> clock later, or maybe even four different FIFOs, one for each channel
> (240 Kbits each).
>
> The Xilinx Spartan-3 XC3S4000 has a total of 1,728 Kbits (enough for
> 960 Kbits per frame) of block RAM (4 RAM columns, 24 RAM blocks per
> column and 18,432 bits per block RAM) that I guess I can use for FIFOs?
>
> I would like a FIFO of size (20K x 12 bits) = 240 Kbits. Can I just
> instantiate that using the CORE Generator?
>
> Not sure if I understand all this right ... the XC3S4000 is a little
> bigger than what I need in terms of logic ... but then again it seems
> like the only one with enough block RAM for the FIFOs unless I am
> misunderstanding something. Please advise ...

==========
Yikes. The 4000 is a bit big if that's all you're doing. I guess this is the 
fun part of the job, and I wouldn't dream of depriving you of it. :) The 
choices are to slow it down; store it off chip; or suck it up and get the 
big chip for its block RAM. I like using an overly large chip least, but 
only you know the constraints of why so fast and what's possible. 3 ns DDR2 
SDRAM is pretty cheap these days ($10 single quantity for 16Mx16 bit).

So, I take it the device doesn't exist yet?