Hi, I am planning to read an image sensor using an FPGA, but I am a little confused about a few things. Hopefully someone here can help me understand the following:

Note: The image sensor output is an ANALOG signal. The datasheet says that the READOUT clock is 40 MHz.

1. How is reading an image sensor using an ADC different than reading any other analog signal using an ADC?

   - Any ordinary signal is read using the Nyquist theorem, that is, sample the signal @ 2 times the highest frequency. And the amount of data or memory required can be calculated using: sampling rate x ADC resolution.

   - Is this different in the case of an image sensor? Why? Because each pixel output is an analog value, and each of those values gets converted into one digital sample? Do I use an ADC running at 40 MSamples/second since the pixel rate is 40 MHz? How do I calculate the required memory? Is it simply 40 MS/s x 16 bits (ADC resolution) for each pixel, or just 16 bits per pixel? If each frame is 320 x 256, then data per frame is (320x256) x 16 bits; why not multiply this by 40 MS/s like you would for any other analog signal?

Thanks,
Image Sensor Interface.
Started by ●June 22, 2008
Reply by ●June 22, 2008
On Jun 22, 10:01 am, ertw <gil...@hotmail.com> wrote:

(snip)

Just realized after posting ... is it because for the image sensor I am only reading the amplitude using the ADC, as opposed to any other random signal where the whole signal is sampled at different intervals?
Reply by ●June 22, 2008
On Sun, 22 Jun 2008 07:01:10 -0700 (PDT), ertw <gill81@hotmail.com> wrote:

>Note: The image sensor output is an ANALOG signal. Datasheet says that
>the READOUT clock is 40MHz.

It somewhat depends on whereabouts in the sensor's output signal processing chain you expect to pick up the signal. Is this a raw sensor chip that you have? Is it hiding behind a sensor drive/control chipset? Is it already packaged, supplying standard composite video output?

>1. How is reading of an image sensor using an ADC different than
>reading a random analog signal using an ADC?

You're right to question this. Of course, at base it isn't - it's just a matter of sampling an analog signal. But the image sensor has some slightly strange properties. First off, the analog signal has already been through some kind of sample-and-hold step. In an idealised world, with a 40 MHz readout clock, you would expect to see the analog signal "flat" for 25 ns while it delivers the sampled value for one pixel, and then make a step change to a different voltage for the next pixel, which again would last for 25 ns, and so on.

In the real world, of course, it ain't that simple. First, you have the limited bandwidth of the analog signal processing chain (inside the image sensor and its support chips), which will cause this idealised stair-step waveform to have all manner of non-ideal characteristics. Indeed, if the output signal is designed for use as an analog composite video signal, then it will probably have been through a low-pass filter to remove most of the staircase-like behaviour. Second, even before the analog signal made it as far as the staircase waveform I described, there will be a lot of business about sampling and resetting the image sensor's output structures.
In summary, all of this says that you should take care to sample the analog signal exactly when the camera manufacturer tells you to sample it, with the 40 MHz sample clock that they've so thoughtfully provided (I hope!).

> And the amount of data or memory required can be calculated using:
> Sampling rate x ADC resolution
>
> - This is different in case of an image sensor

Of course it is not different. If you get 16 bits, 40M times per second, then you have 640 Mbit/sec to handle.

> Do I use an ADC running at 40 MSamples/second since the
> pixel output 40 MHz ?

If the camera manufacturer gives you a "sampled analog" output and a sampling clock, then yes. On the other hand, if all you have is a composite analog video output with no sampling clock, you are entirely free to choose your sampling rate - bearing in mind that it may not match up with pixels on the camera, and therefore you are trusting the camera's low-pass filter to do a good job of the interpolation for you.

> How do I calculate the required memory ?
>
> Is it simply 40 MS/s x 16 bits (adc resolution) for each pixel

Eh?

> or just 16 bits per pixel ?

Only the very highest quality cameras give an output that's worth digitising to 16-bit precision. 10 bits should be enough for anyone; 8 bits is often adequate for low-spec applications such as webcams and surveillance.

> If each frame is 320 x 256 then data per frame is - (320x256) x 16 bits,
> why not multiply this by 40 MS/s like you would for any other random analog signal ?

I have no idea what you mean. 40 MHz is the *pixel* rate. Let's follow that through:

  40 MHz, 320 pixels on a line - that's 8 microseconds per line. But don't forget to add the extra 2 us or thereabouts that will be needed for horizontal synch or whatever. Let's guess 10 us per line.

  256 lines per image, 10 us per line - that's 2.56 milliseconds per image. But, again, we need to add a margin for frame synch. Perhaps 3 ms per image.

  Wow, you're getting 330 images per second - that's way fast.
But whatever you do, if you sample your ADC at 40 MHz then you get 40 million samples per second!

~~~~~~~~~~~~~~~~~~~~~~~

More questions:

What about colour? Or is this a monochrome sensor?

Do you get explicit frame and line synch signals from the camera, or must you extract them from the composite video signal?

Must you create the camera's internal line, pixel and field clocks yourself in the FPGA, or does the camera already have clock generators in its support circuitry?

~~~~~~~~~~~~~~~~~~~~~~

You youngsters have it so easy :-) The first CCD camera controller I did had about 60 MSI chips in it: an unholy mess of PALs, TTL, CMOS, special-purpose level shifters for the camera clocks (TSC426, anyone?), sample-and-hold and analog switch devices to capture the camera output, some wild high-speed video amplifiers (LM533)... And the imaging device itself, from Fairchild IIRC, was only NTSC-video resolution and cost around $300. Things have moved on a little in the last quarter-century...
--
Jonathan Bromley, Consultant

DOULOS - Developing Design Know-how
VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services

Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK
jonathan.bromley@MYCOMPANY.com
http://www.MYCOMPANY.com

The contents of this message may contain personal views which are not the views of Doulos Ltd., unless specifically stated.
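Jonathan's timing arithmetic above can be followed with a quick back-of-the-envelope script. The sync overheads are his round-number guesses, not datasheet figures:

```python
# Frame timing budget for a 40 MHz pixel clock, 320x256 sensor,
# following the reasoning in the post above. Sync margins are
# assumed round numbers, not datasheet values.

PIXEL_CLOCK_HZ = 40e6
PIXELS_PER_LINE = 320
LINES_PER_FRAME = 256
H_SYNC_S = 2e-6            # assumed horizontal-sync overhead per line
V_SYNC_MARGIN_S = 0.44e-3  # assumed frame-sync margin, rounding up to ~3 ms

line_s = PIXELS_PER_LINE / PIXEL_CLOCK_HZ + H_SYNC_S   # 8 us + 2 us = 10 us
frame_s = LINES_PER_FRAME * line_s + V_SYNC_MARGIN_S   # 2.56 ms + margin = 3 ms
fps = 1.0 / frame_s

print(f"line time : {line_s * 1e6:.1f} us")
print(f"frame time: {frame_s * 1e3:.2f} ms")
print(f"max rate  : {fps:.0f} frames/s")
```

With those assumed margins this reproduces the ~330 frames/s figure; any real sensor's datasheet will give the actual blanking intervals.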
Reply by ●June 22, 2008
"ertw" <gill81@hotmail.com> wrote in message news:cab9c099-e80f-47de-8e75-ca0a0e558ec7@m73g2000hsh.googlegroups.com...

> - Any random signal is read using nyquist theorem that is sample
> the signal @ 2 times the highest frequency.
> And the amount of data or memory required can be calculated using:
> Sampling rate x ADC resolution

Nyquist relates to sinusoids and periodicity in the signal. The sampling period as it relates to Nyquist with your image sensor is the frame rate, not the pixel clock/ADC sample rate. The two are not related in a meaningful way. Fuhget about it.

While reading Proakis, I remember distinctly thinking that mathematics is the wrong language to impart an intuitive grasp of some topics for most folks. Discrete-time signals would top my list of examples. (How do you take something so conceptually simple and fill 120 pages with dense prose? Somebody should know the details, in all their glorious minutiae, if only to pass them to the next generation. But how much of it is useful to a practicing engineer?)

> - This is different in case of an image sensor ? Why ?
> How do I calculate the required memory ?

Intuitively, you are capturing image frames. The pixel content makes sense only in context of the frame. Calculate the memory required to hold a complete frame.

> Is it simply 40 MS/s x 16 bits (adc resolution) for each pixel
> or just 16 bits per pixel ?
> If each frame is 320 x 256 then data per frame is - (320x256) x 16 bits,
> why not multiply this by 40 MS/s like you would for any other random analog signal ?

Because each sample is at most one pixel, not an entire 320x256x16 frame buffer.
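The "memory for a complete frame" arithmetic is just width x height x bits per pixel; a minimal sketch using the numbers from the thread:

```python
def frame_bytes(width, height, bits_per_pixel):
    """Storage for one frame: one digitised sample per pixel,
    independent of how fast the pixels were read out."""
    return width * height * bits_per_pixel // 8

# The 320x256, 16-bit sensor discussed in the thread:
size = frame_bytes(320, 256, 16)
print(size)          # 163840 bytes per frame
print(size / 1024)   # 160.0 KB per frame
```

The readout rate (40 MS/s) only determines how quickly that 160 KB arrives, not how much of it there is.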
Reply by ●June 23, 2008
MikeWhy wrote:
(snip)

> Nyquist relates to sinusoids and periodicity in the signal. The sampling
> period as it relates to Nyquist with your image sensor is the frame
> rate, not the pixel clock/ADC sample rate. The two are not related in a
> meaningful way. Fuhget about it.

Yes, Nyquist is completely unrelated to the signal coming out of an image sensor, but it is important in what goes in.

Specifically, the image sensor samples an analog (image) in two dimensions, and, for the result to be correct, the image itself must not have spatial frequencies at the sensor surface higher than half the pixel spacing. Sometimes one trusts the lens to do that; other times an optical low-pass filter is used.

-- glen
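glen's point is the ordinary aliasing argument applied in space rather than time: a pattern finer than the pixel grid can resolve produces exactly the same pixel values as a coarser pattern, which is why the filtering must happen optically, before the sensor. A small numeric illustration (the pitch and frequencies are arbitrary, not tied to any real sensor):

```python
import math

# Pixel pitch p samples spatial frequencies up to f_nyq = 1/(2p).
# A pattern at f > f_nyq yields the same samples as its alias at fs - f.

pitch = 1.0            # pixel spacing, arbitrary units
fs = 1.0 / pitch       # spatial sampling rate: 1 sample per pixel
f_hi = 0.75 * fs       # above the Nyquist limit of 0.5 * fs
f_alias = fs - f_hi    # 0.25 * fs: the frequency it masquerades as

samples_hi = [math.cos(2 * math.pi * f_hi * n * pitch) for n in range(8)]
samples_lo = [math.cos(2 * math.pi * f_alias * n * pitch) for n in range(8)]

# After sampling, the two patterns are indistinguishable:
print(all(abs(a - b) < 1e-9 for a, b in zip(samples_hi, samples_lo)))  # True
```

No amount of post-ADC processing can undo this, hence the lens or optical low-pass filter doing the anti-alias job in front of the pixels.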
Reply by ●June 23, 2008
On Jun 23, 2:59 pm, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:

(snip)

Guys, thanks a lot for the help. Jonathan, your explanation was great. Answers to the questions you asked:

- It's a monochrome sensor.
- I do get explicit frame and line signals from the sensor.
- The sensor does not have any clock-generating circuitry (I have to provide the clock, or pixel clock, to the sensor; not sure if I was clear about that in the previous post).

I have a few more questions regarding data storage and processing (I think the readout from the sensor is a little clearer in my head now). The sensor is a packaged integrated circuit with processing applied to the final-stage analog signal (that's where I am planning to read it using an ADC). The output is actually 4 differential signals (one for each column), meaning I will need four ADCs (all four video output signals come out simultaneously). The resolution that I want is 16 bits.

Now, that means I have four parallel channels of 16 bits coming into the FPGA every 25 ns that I need to store somewhere. The total data per frame is: (320 x 256) x 16 bits = 1310720 bits/frame, or 163840 bytes/frame, or 160 KBytes/frame. Do you think I can store that much within a Xilinx FPGA?
I am trying to do 30 frames per second, which means I have roughly 33 ms per frame, but using a 40 MHz clock each frame can be read out in 512 microseconds, with a whole lot of dead time after each frame (unless I can run the sensor at a slower pixel clock). The idea is to transfer the data over the PCI bus to the computer, and I can't go over 133 mega-transfers per second. Since I am reading 4 channels @ 40 MHz, that works out to 160 megasamples per second, so it is not possible to transfer the data on the fly over the bus (unless I am misunderstanding something). Is there a way to transfer data on the fly over the PCI bus other than slowing the pixel clock? Or how can I efficiently transfer the data over the bus (even if I have to store it and then use a slower clock to transfer it out)?
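The key to the PCI question is the difference between the burst rate during the 512 us readout and the sustained rate averaged over a 30 frames/s schedule. A quick check using the figures from the post (the 133 MB/s PCI figure is the theoretical peak for 32-bit/33 MHz PCI, and real sustained throughput will be lower):

```python
# Burst vs sustained bandwidth for the setup described above:
# 4 channels, 40 MS/s each, 16-bit samples, 320x256 frames at 30 fps.

CHANNELS = 4
SAMPLE_RATE = 40e6                     # samples/s per channel during readout
BITS = 16
FRAME_BYTES = 320 * 256 * BITS // 8    # 163840 bytes per frame
FPS = 30
PCI_BYTES_PER_S = 133e6                # 32-bit/33 MHz PCI theoretical peak

burst_B_per_s = CHANNELS * SAMPLE_RATE * BITS / 8   # during the 512 us burst
sustained_B_per_s = FRAME_BYTES * FPS               # averaged over a frame period

print(f"burst    : {burst_B_per_s / 1e6:.0f} MB/s")      # 320 MB/s, exceeds PCI
print(f"sustained: {sustained_B_per_s / 1e6:.1f} MB/s")  # ~4.9 MB/s, fits easily
```

So the raw burst indeed cannot go over the bus on the fly, but the average is under 5 MB/s: buffering each frame in the FPGA and draining it over the remaining ~33 ms solves the problem without slowing the pixel clock.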
Reply by ●June 23, 2008
"glen herrmannsfeldt" <gah@ugcs.caltech.edu> wrote in message news:xcadnWI25-LcecLVnZ2dnUVZ_tTinZ2d@comcast.com...

> Specifically, the image sensor samples an analog (image) in
> two dimensions, and, for the result to be correct the image itself
> must not have spatial frequencies at the sensor surface higher
> than half the pixel spacing. Sometimes one trusts the lens to
> do that, others an optical low pass filter is used.

Which do you mean? Two pixels is Nyquist critical. Half-pixel aliasing is a spatial resolution problem, not a spectral aliasing (Nyquist) issue.
Reply by ●June 23, 2008
"ertw" <gill81@hotmail.com> wrote in message news:812c4ac9-d1cf-4a1d-a66b-807aeb0c7359@m45g2000hsb.googlegroups.com...

> Now, that means I have four parallel channels of 16 bits coming into the
> FPGA every 25 ns that I need to store somewhere. The total data per frame
> is: (320 x 256) x 16 bits = 1310720 bits/frame, or 163840 bytes/frame, or
> 160 KBytes/frame. Do you think I can store that much within a Xilinx FPGA?
>
> I am trying to do 30 frames per second, which means I have roughly 33 ms
> per frame, but using a 40 MHz clock each frame can be read out in 512
> microseconds with a whole lot of dead time after each frame (unless I can
> run the sensor at a slower pixel clock).

A block RAM FIFO comes to mind - maybe even four of them, one for each column stream. Search the docs for BRAM. The frames are small enough, and 33 ms is long enough, that you likely won't need to double-buffer, for example by staging the data in larger, slower external memory to allow for bus contention.
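A rough way to size that FIFO: worst-case occupancy is the burst size minus whatever the bus drains while the burst is in progress. A sketch with the frame geometry from the thread and an assumed drain rate (the 33 Mword/s figure is illustrative, not a measured PCI throughput):

```python
# Rough FIFO sizing for the scheme suggested above: fill at the
# 4-channel 40 MHz readout burst, drain at an assumed sustained bus rate.

FRAME_WORDS = 320 * 256    # 16-bit words per frame (81920)
FILL_RATE = 4 * 40e6       # words/s during the readout burst
DRAIN_RATE = 33e6          # words/s: illustrative sustained drain rate

burst_s = FRAME_WORDS / FILL_RATE            # 512 us readout burst
drained_during_burst = DRAIN_RATE * burst_s  # words removed while filling
depth_words = FRAME_WORDS - drained_during_burst  # peak FIFO occupancy

print(f"burst length: {burst_s * 1e6:.0f} us")
print(f"peak depth  : {depth_words / 1024:.1f} Kwords")
```

With any realistic drain rate so far below the 160 Mword/s burst, the FIFO has to hold most of a frame anyway, which supports the simpler plan: buffer the whole 160 KB frame in block RAM and stream it out over the remaining ~33 ms.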