On Jun 22, 10:43 am, Jonathan Bromley <jonathan.brom...@MYCOMPANY.com>
wrote:
> On Sun, 22 Jun 2008 07:01:10 -0700 (PDT), ertw <gil...@hotmail.com>
> wrote:
>
> >Hi, I am planning to read an image sensor using an FPGA but I am a
> >little confused about a bunch of things. Hopefully someone here can
> >help me understand the following things:
>
> >Note: The image sensor output is an ANALOG signal. Datasheet says that
> >the READOUT clock is 40MHz.
>
> It somewhat depends on whereabouts in the sensor's output
> signal processing chain you expect to pick up the signal.
> Is this a raw sensor chip that you have?  Is it hiding
> behind a sensor drive/control chipset?  Is it already
> packaged, supplying standard composite video output?
>
>
>
> >1. How is reading of an image sensor using an ADC different from
> >reading a random analog signal using an ADC?
>
> You're right to question this.  Of course, at base it isn't -
> it's just a matter of sampling an analog signal.  But the image
> sensor has some slightly strange properties.  First off, the
> analog signal has already been through some kind of sample-
> and-hold step.  In an idealised world, with a 40 MHz readout
> clock, you would expect to see the analog signal "flat" for
> 25ns while it delivers the sampled signal for one pixel,
> and then make a step change to a different voltage for the
> next pixel, which again would last for 25ns, and so on.
>
> In the real world, of course, it ain't that simple.  First,
> you have the limited bandwidth of the analog signal-processing
> chain (inside the image sensor and its support chips), which will
> cause this idealised stair-step waveform to have all manner of
> non-ideal characteristics.  Indeed, if the output signal is
> designed for use as an analog composite video signal, then
> it will probably have been through a low-pass filter to remove
> most of the staircase-like behaviour.  Second, even before
> the analog signal made it as far as the staircase waveform
> I described, there will be a lot of business about sampling
> and resetting the image sensor's output structures.
>
> In summary, all of this stuff says that you should take
> care to sample the analog signal exactly when the camera
> manufacturer tells you to sample it, with the 40 MHz sample
> clock that they've so thoughtfully provided (I hope!).
>
> >      And the amount of data or memory required can be calculated
> >using:
> >      Sampling rate x ADC resolution
>
> >    - Is this different in the case of an image sensor?
>
> Of course it is not different.  If you get 16 bits, 40M times
> per second, then you have 640 Mbit/s to handle.
>
> > Do I use an ADC running at 40 MSamples/second since the
> > pixel output is at 40 MHz?
>
> If the camera manufacturer gives you a "sampled analog"
> output and a sampling clock, then yes.  On the other hand,
> if all you have is a composite analog video output with
> no sampling clock, you are entirely free to choose your
> sampling rate - bearing in mind that it may not match
> up with pixels on the camera, and therefore you are
> trusting the camera's low-pass filter to do a good job
> of the interpolation for you.
>
> >      How do I calculate the required memory ?
>
> >      Is it simply 40 MS/s x 16 bits (ADC resolution) for each pixel
>
> eh?
>
> >or just 16 bits per pixel ?
>
> Only the very highest quality cameras give an output that's worth
> digitising to 16-bit precision.  10 bits should be enough for
> anyone; 8 bits is often adequate for low-spec applications such
> as webcams and surveillance.
>
> >      If each frame is 320 x 256 then the data per frame is (320x256) x
> >16 bits; why not multiply this by 40 MS/s like
> >      you would for any other random analog signal ?
>
> I have no idea what you mean.  40 MHz is the *pixel* rate.  Let's
> follow that through:
>
>   40 MHz, 320 pixels on a line - that's 8 microseconds per line.
>   But don't forget to add the extra 2us or thereabouts that will
>   be needed for horizontal synch or whatever.  Let's guess 10us
>   per line.
>
>   256 lines per image, 10us per line - that's 2.56 milliseconds per
>   image - but, again, we need to add a margin for frame synch.
>   Perhaps 3ms per image.
>
>   Wow, you're getting 330 images per second - that's way fast.
>
> But whatever you do, if you sample your ADC at 40 MHz then you
> get 40 million samples per second!
>
> ~~~~~~~~~~~~~~~~~~~~~~~
>
> More questions:
>
> What about colour?  Or is this a monochrome sensor?
>
> Do you get explicit frame and line synch signals from the
> camera, or must you extract them from the composite
> video signal?
>
> Must you create the camera's internal line, pixel and field
> clocks yourself in the FPGA, or does the camera already have
> clock generators in its support circuitry?
>
> ~~~~~~~~~~~~~~~~~~~~~~
>
> You youngsters have it so easy :-)  The first CCD camera
> controller I did had about 60 MSI chips in it, an unholy
> mess of PALs, TTL, CMOS, special-purpose level shifters
> for the camera clocks (TSC426, anyone?), sample-and-hold
> and analog switch devices to capture the camera output,
> some wild high-speed video amplifiers (LM533)...  And
> the imaging device itself, from Fairchild IIRC, was only
> NTSC-video resolution and cost around $300.  Things have
> moved on a little in the last quarter-century...
> --
> Jonathan Bromley, Consultant
>
> DOULOS - Developing Design Know-how
> VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services
>
> Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK
> jonathan.brom...@MYCOMPANY.com
> http://www.MYCOMPANY.com
>
> The contents of this message may contain personal views which
> are not the views of Doulos Ltd., unless specifically stated.
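Jonathan's line/frame timing above can be sanity-checked with a quick script. The 2 us horizontal-synch margin and the frame-synch margin are his guesses, not datasheet figures; the constants here are just illustrative names:

```python
# Sanity check of the quoted line/frame timing, assuming the same
# guessed overheads (~2 us per line for horizontal synch, and a
# frame-synch margin chosen to land near the ~3 ms estimate).
PIXEL_CLOCK_HZ = 40e6
PIXELS_PER_LINE = 320
LINES_PER_FRAME = 256

active_line_s = PIXELS_PER_LINE / PIXEL_CLOCK_HZ   # 8 us of active pixels
line_s = active_line_s + 2e-6                      # ~10 us with synch margin
frame_s = LINES_PER_FRAME * line_s + 0.44e-3       # ~3 ms with frame-synch margin
print(f"line:  {line_s * 1e6:.0f} us")
print(f"frame: {frame_s * 1e3:.2f} ms -> about {1 / frame_s:.0f} frames/s")
```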
Guys, thanks a lot for the help. Jonathan, your explanation was
great ...
Answers to the questions you asked:
- It's a monochrome sensor.
- I do get explicit frame and line signals from the sensor.
- The sensor does not have any clock-generating circuitry (I have to
provide the clock, i.e. the pixel clock, to the sensor; not sure if I
was clear about that in the previous post).
I have a few more questions regarding data storage and processing (I
think the readout from the sensor is a little clearer in my head now).
The sensor is a packaged integrated circuit with processing applied to
the final-stage analog signal (that's where I am planning to read it
using an ADC).
The output is actually 4 differential signals (one for each column),
meaning I will need four ADCs (all four video output signals come out
simultaneously). The resolution that I want is 16 bits.
Now, that means I have four parallel channels of 16 bits coming into
the FPGA every 25 ns that I need to store somewhere. The total data
per frame is:
(320 x 256) x 16 bits = 1310720 bits/frame, or 163840 bytes/frame, or
160 KBytes/frame.
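That per-frame figure can be double-checked in a couple of lines (the constant names are just illustrative):

```python
# Per-frame storage for a 320x256 monochrome frame at 16 bits/pixel.
WIDTH, HEIGHT, BITS_PER_PIXEL = 320, 256, 16

bits_per_frame = WIDTH * HEIGHT * BITS_PER_PIXEL
bytes_per_frame = bits_per_frame // 8
print(bits_per_frame)    # 1310720 bits/frame
print(bytes_per_frame)   # 163840 bytes/frame = 160 KBytes/frame
```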
Do you think I can store that much within a Xilinx FPGA? I am trying
to do 30 frames per second, which means I have roughly 33 ms per frame;
but using a 40 MHz clock, each frame can be read out in 512 microseconds,
with a whole lot of dead time after each frame (unless I can run the
sensor at a slower pixel clock).
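The readout-versus-dead-time numbers work out like this (a quick sketch, assuming four pixels arrive per 40 MHz clock and a 30 fps frame period as above):

```python
# Readout time with four parallel 16-bit channels at the 40 MHz pixel
# clock, and the resulting dead time in a 30 fps frame period.
PIXELS = 320 * 256
CHANNELS = 4
PIXEL_CLOCK_HZ = 40e6

clocks_per_frame = PIXELS // CHANNELS             # 4 pixels arrive per clock
readout_s = clocks_per_frame / PIXEL_CLOCK_HZ     # 512 us of actual readout
frame_period_s = 1 / 30                           # ~33.3 ms at 30 fps
dead_s = frame_period_s - readout_s               # idle time after each frame
print(f"readout: {readout_s * 1e6:.0f} us, dead time: {dead_s * 1e3:.1f} ms")
```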
The idea is to transfer the data over the PCI bus to the computer, and I
can't go over 133 Mbytes per second. Since I am reading 4 channels of
16 bits @ 40 MHz, that works out to 160 Msamples (320 Mbytes) per
second, so it is not possible to transfer the data on the fly over the
bus (unless I am misunderstanding something). Is there a way to transfer
the data on the fly over the PCI bus other than slowing the pixel clock?
Or how can I efficiently transfer the data over the bus (even if I have
to store it and then use a slower clock to transfer the data out)?
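For what it's worth, here is the burst-versus-sustained arithmetic as I understand it (a rough sketch; the 133 Mbytes/s figure is the 32-bit/33 MHz PCI theoretical peak, and it assumes the whole frame is buffered in the FPGA and drained over the ~33 ms frame period):

```python
# Burst rate into the FPGA vs. the sustained rate needed over PCI,
# assuming full-frame buffering and a 30 fps frame rate.
bytes_per_frame = 320 * 256 * 16 // 8      # 163840 bytes per frame
burst_Bps = 4 * 2 * 40e6                   # 4 channels x 2 bytes at 40 MHz
pci_peak_Bps = 133e6                       # 32-bit/33 MHz PCI theoretical peak
sustained_Bps = bytes_per_frame * 30       # what 30 fps actually demands
print(f"burst into FPGA: {burst_Bps / 1e6:.0f} Mbytes/s (exceeds PCI peak)")
print(f"sustained need:  {sustained_Bps / 1e6:.1f} Mbytes/s (well under PCI peak)")
```

So the burst rate is far beyond the bus, but the average rate at 30 fps is only about 5 Mbytes/s, which is why store-then-drain at a slower clock looks feasible.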