"Colin Paul Gloster" <Colin_Paul_Gloster@ACM.org> schrieb im Newsbeitrag
news:20060503210430.A70973@docenti.ing.unipi.it...
> Of course implementing parallelism with real parallelism is easier, but
> verifying something, whether it is implemented with true parallelism or
> as interleaved sequential code, should take the same effort no matter the
> implementation: check whether the inputs and the outputs match.
I still believe that verifying parallel structures on a PLD is easier than
on a CPU. Imagine a program that has to handle certain communication
interfaces (CAN, RS232, ...) and has to measure some real-time signals at the
same time. On a PLD these modules can be checked separately, since
there are no dependencies on a single shared CPU. In a CPU-based
system these dependencies are crucial (in real-time systems), and a lot
of test effort is spent examining them.
Reply by Colin Paul Gloster●May 3, 2006
Kolja Sulimma wrote in
news:4448ed30$0$18265$9b4e6d93@newsread2.arcor-online.net :
"[..]
Formal model checking and property checking are becoming mainstream for
hardware development but are hardly ever used for software development.
[..]"
There is a difference between being mainstream and being available. Formal
methods have existed for a long time for software. Just as one should not
be willing to accept a software developer opposed to formal methods, one
should not be willing to accept a hardware developer who has only acquired
exposure to formal methods because of a trend instead of actively
appreciating the need for them.
On Thu, 27 Apr 2006, Falk Salewski wrote:
"[..] I am also of the opinion that applications realizing
hard real-time parallel functionality are easier to verify on a device
allowing real parallelism.
[..]"
Of course implementing parallelism with real parallelism is easier, but
verifying something, whether it is implemented with true parallelism or
as interleaved sequential code, should take the same effort no matter the
implementation: check whether the inputs and the outputs match.
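The input/output check described here can be sketched as a black-box equivalence test. The two implementations below are hypothetical stand-ins for a truly parallel design and its interleaved sequential counterpart; the point is only that the same harness and the same test vectors exercise both:

```python
# Black-box equivalence check: two hypothetical implementations of the same
# two-channel task, one finishing channel A before channel B, the other
# interleaving them sample by sample (as a scheduler on a single CPU would).

def sequential(ch_a, ch_b):
    # Process channel A completely, then channel B.
    return [x * 2 for x in ch_a], [x + 1 for x in ch_b]

def interleaved(ch_a, ch_b):
    # Alternate between the channels sample by sample.
    out_a, out_b = [], []
    for a, b in zip(ch_a, ch_b):
        out_a.append(a * 2)
        out_b.append(b + 1)
    return out_a, out_b

def verify(impl_1, impl_2, test_vectors):
    # The check is the same regardless of how either side is implemented:
    # feed identical inputs and compare outputs.
    return all(impl_1(*v) == impl_2(*v) for v in test_vectors)

vectors = [([1, 2, 3], [4, 5, 6]), ([0, 0], [9, 9])]
print(verify(sequential, interleaved, vectors))  # True
```

The harness itself is identical for both implementations; what may differ in practice is how many vectors are needed to cover the scheduling behaviour of the interleaved one.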
Reply by Rene Tschaggelar●May 3, 2006
Falk Salewski wrote:
> I am doing some research on the reliability of microcontroller software in
> comparison to hardware description languages for PLDs (CPLD/FPGA).
>
> Another interesting point is whether there are general reliability benefits
> of one hardware platform, e.g. in an automotive environment.
>
>
>
> I read about certification problems if an SRAM-based FPGA is programmed at
> every system start, and that flash- or fuse-based systems are preferable. I
> also read that CPLDs (flash) are in general more robust than FPGAs.
>
> Can you confirm/confute this?
What are the allowed failure modes? All of them?
That includes alpha particles, fast protons, thermal
cycles, vibration, supply and signal issues, electric
and magnetic fields, the lot.
Plus, how failure-proof is the design? How does it handle
unexpected values? While in some respects 90 nm technology
is more sensitive, that does not mean an acre of 2N3055s
doing the same job would be more reliable.
Rene
--
Ing.Buero R.Tschaggelar - http://www.ibrtses.com
& commercial newsgroups - http://www.talkto.net
Reply by Falk Salewski●April 27, 2006
"Kolja Sulimma" <news@sulimma.de> schrieb im Newsbeitrag
news:4448ed30$0$18265$9b4e6d93@newsread2.arcor-online.net...
> Falk Salewski wrote:
>> I am doing some research on the reliability of microcontroller software
>> in
>> comparison to hardware description languages for PLDs (CPLD/FPGA).
>>
>> Another interesting point is whether there are general reliability
>> benefits of one hardware platform, e.g. in an automotive environment.
>
> This all depends on the type of errors you are talking about. Getting an
> overall estimate will be really difficult.
>
> E.g. in automotive systems a big issue is real-time constraint violations
> when many things happen at once. You can easily specify the timing of most
> hardware-implemented algorithms with a granularity of nanoseconds
> because there is real concurrency in the implementation. For a uC it is
> hard to get below tens of microseconds.
>
> Also, error detection and correction on ALUs, busses and memory is just
> not available for commercial uCs, while you can easily implement it in
> your FPGA circuit. In theory a uC using all these techniques would be
> more reliable, but if you cannot buy it....
> (BTW: I talked to Bosch about that topic, and apparently the volume of
> their orders is not big enough to have Motorola design such a uC for
> them.)
>
> Formal model checking and property checking are becoming mainstream for
> hardware development but are hardly ever used for software development.
>
> These are all factors in favor of FPGAs that are often not considered,
> but I am sure that you can come up with many reasons why uCs are more
> reliable. (Fewer transistors, for example.)
>
> Kolja Sulimma
>
Thanks for your reply! I am also of the opinion that applications realizing
hard real-time parallel functionality are easier to verify on a device
allowing real parallelism.
The possible integration of error detection and correction functionality in
FPGAs is also a big plus, in my opinion.
Finally, it seems that the reliability comparison of MCU vs. FPGA is,
again, application dependent.
Falk Salewski
Reply by radarman●April 24, 2006
Trust me, it is more complicated than that, but there are plenty of
both legit and questionable reasons for going with external buffers.
For one, we are typically driving very long cable harnesses or large
backplanes with lots of fan-out. While an FPGA pin might be able to do
it, we are guaranteed performance with the external parts. There is
also the fact that a technician can reasonably replace, or probe, a
buffer chip - while a BGA repair requires a trip back to the factory.
Then, there is debug and integration. Our integration and test cycles
are already too short to allow for a two-week trip back to the factory
for rework.
Also, even at just 5%, the buffers are cheaper.
Reply by Simon Peacock●April 22, 2006
It is probable that the buffer, although offering more pins to cause faults
(military boards will be X-rayed and each solder joint inspected, don't
forget), offers a level of protection that FPGA pins can't.
A typical "interface" buffer chip has a higher drive strength and better ESD
protection through bigger geometry, and the "real" outside connections have
ESD diodes and the proper interface for the conditions, including current
limiting, voltage control, hot-plugging support, etc.
Simon
"Kolja Sulimma" <news@sulimma.de> wrote in message
news:444a64d2$0$11079$9b4e6d93@newsread4.arcor-online.net...
> radarman wrote:
> > Where I work, we aren't allowed to directly connect FPGA or CPLD pins
> > directly to external connectors, save for on-board test points (like
> > Mictor connectors). Everything goes through external buffers or
> > registers. Yes, it does add latency, but it does protect
> > hard-to-replace BGA's from damage.
> >
> > Of course, I work on military hardware, and reliability is a major
> > factor. While most things are replaced at LRU (chassis) level, there
> > are some systems where the customer is allowed to replace individual
> > boards. Usually, this happens in a customer repair facility, and is
> > done by military technicians, but still - it pays to go the extra mile.
>
> I thought in military applications reliability is more important than
> cost. For standard buffers I would argue that you get a much higher
> failure rate with the buffers than without. You have three times the
> number of solder joints and many more parts, after all.
> Also, many buffer chips are less robust than FPGA pins. Some don't
> even have protection diodes.
> Of course, if you use special ESD protection buffers, all this changes.
> But some passive protection on the FPGA pin might give you the same
> effect.
>
> > The other factor is that every board costs so much, that they are
> > almost never thrown away, and instead reworked. It is much simpler to
> > replace a buffer chip than a BGA.
>
> With the right tools it is not really more complicated to replace a BGA
> than an SOIC. Local IR heating, pulling the chip, cleaning the board,
> placing a new chip, local IR heating again.
> Cleaning takes longer because there are more pads. But that's about it.
> I doubt that the cost of replacing the BGA is more than 5% of the cost
> of isolating the defect.
>
> Kolja Sulimma
Reply by Kolja Sulimma●April 22, 2006
radarman wrote:
> Where I work, we aren't allowed to directly connect FPGA or CPLD pins
> directly to external connectors, save for on-board test points (like
> Mictor connectors). Everything goes through external buffers or
> registers. Yes, it does add latency, but it does protect
> hard-to-replace BGA's from damage.
>
> Of course, I work on military hardware, and reliability is a major
> factor. While most things are replaced at LRU (chassis) level, there
> are some systems where the customer is allowed to replace individual
> boards. Usually, this happens in a customer repair facility, and is
> done by military technicians, but still - it pays to go the extra mile.
I thought in military applications reliability is more important than
cost. For standard buffers I would argue that you get a much higher
failure rate with the buffers than without. You have three times the
number of solder joints and many more parts, after all.
Also, many buffer chips are less robust than FPGA pins. Some don't
even have protection diodes.
Of course, if you use special ESD protection buffers, all this changes.
But some passive protection on the FPGA pin might give you the same effect.
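The solder-joint argument can be put into rough numbers with a series-reliability estimate, where the failure rates of independent parts simply add. The FIT values below are invented purely for illustration; real figures depend on package, process, and environment:

```python
# Rough series-reliability estimate: for independent parts in series the
# failure rates (in FIT, failures per 1e9 device-hours) add up.
# All FIT values here are invented for illustration only.

FIT_SOLDER_JOINT = 0.05  # per joint (assumed)
FIT_BUFFER_CHIP  = 5.0   # per buffer IC (assumed)

def signal_path_fit(n_joints, n_buffers):
    """Total FIT of one signal path treated as a series system."""
    return n_joints * FIT_SOLDER_JOINT + n_buffers * FIT_BUFFER_CHIP

direct   = signal_path_fit(n_joints=1, n_buffers=0)  # FPGA pin straight to connector
buffered = signal_path_fit(n_joints=3, n_buffers=1)  # pin -> buffer -> connector

print(f"direct: {direct} FIT, buffered: {buffered} FIT")
```

With these assumed numbers the buffered path accumulates roughly a hundred times the failure rate, which is the direction of this argument; a buffer that survives ESD events the bare FPGA pin would not could still win overall, which is the counterpoint.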
> The other factor is that every board costs so much, that they are
> almost never thrown away, and instead reworked. It is much simpler to
> replace a buffer chip than a BGA.
With the right tools it is not really more complicated to replace a BGA
than an SOIC. Local IR heating, pulling the chip, cleaning the board,
placing a new chip, local IR heating again.
Cleaning takes longer because there are more pads. But that's about it.
I doubt that the cost of replacing the BGA is more than 5% of the cost
of isolating the defect.
Kolja Sulimma
Reply by Kolja Sulimma●April 21, 2006
Falk Salewski wrote:
> I am doing some research on the reliability of microcontroller software in
> comparison to hardware description languages for PLDs (CPLD/FPGA).
>
> Another interesting point is whether there are general reliability benefits
> of one hardware platform, e.g. in an automotive environment.
This all depends on the type of errors you are talking about. Getting an
overall estimate will be really difficult.
E.g. in automotive systems a big issue is real-time constraint violations
when many things happen at once. You can easily specify the timing of most
hardware-implemented algorithms with a granularity of nanoseconds
because there is real concurrency in the implementation. For a uC it is
hard to get below tens of microseconds.
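The granularity claim can be illustrated with back-of-the-envelope arithmetic. All clock and latency figures below are assumed for illustration only, not measurements of any particular device:

```python
# Back-of-the-envelope timing comparison. All figures are assumptions
# chosen for illustration, not measurements.

FPGA_CLOCK_HZ   = 100e6  # assumed 100 MHz fabric clock
PIPELINE_CYCLES = 4      # assumed fixed pipeline latency of the block

# A hardware block with a fixed pipeline responds in the same number of
# cycles every time, so its latency is known to the nanosecond.
fpga_latency_s = PIPELINE_CYCLES / FPGA_CLOCK_HZ  # 40 ns, zero jitter

# On a single CPU the same task shares the core with interrupts and other
# tasks, so only a best/worst-case window can be stated (assumed values).
uc_best_case_s  = 2e-6
uc_worst_case_s = 50e-6
uc_jitter_s = uc_worst_case_s - uc_best_case_s

print(f"FPGA latency: {fpga_latency_s * 1e9:.0f} ns (deterministic)")
print(f"uC response window: {uc_best_case_s * 1e6:.0f}-{uc_worst_case_s * 1e6:.0f} us")
```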
Also, error detection and correction on ALUs, busses and memory is just
not available for commercial uCs, while you can easily implement it in
your FPGA circuit. In theory a uC using all these techniques would be
more reliable, but if you cannot buy it....
(BTW: I talked to Bosch about that topic, and apparently the volume of
their orders is not big enough to have Motorola design such a uC for them.)
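A minimal sketch of this kind of check, here a single even-parity bit over a data word (far weaker than the SECDED ECC a real design would use), shows how little logic such detection needs:

```python
# Minimal error-detection sketch: one even-parity bit per data word, the
# kind of check that costs a handful of LUTs on an FPGA bus or memory
# interface but is absent from most commodity microcontrollers.

def parity_bit(word, width=8):
    # Even parity: XOR of all data bits.
    p = 0
    for i in range(width):
        p ^= (word >> i) & 1
    return p

def encode(word):
    # Append the parity bit below the data bits.
    return (word << 1) | parity_bit(word)

def check(coded):
    # Returns (word, ok). Any single-bit error flips the overall parity.
    word = coded >> 1
    ok = parity_bit(word) == (coded & 1)
    return word, ok

coded = encode(0b1011_0010)
print(check(coded))          # data intact, check passes
print(check(coded ^ 0b100))  # injected single-bit error is detected
```

In hardware this is just an XOR tree per word; extending it to single-error correction (a Hamming code) is the step a safety-oriented FPGA design would typically take.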
Formal model checking and property checking are becoming mainstream for
hardware development but are hardly ever used for software development.
These are all factors in favor of FPGAs that are often not considered,
but I am sure that you can come up with many reasons why uCs are more
reliable. (Fewer transistors, for example.)
Kolja Sulimma
Reply by Falk Salewski●April 21, 2006
Thanks for your reply! We also had problems with CPLDs dying, probably
due to excessive voltages (in a lab course with students). We are now using
Spartan FPGAs in combination with bus switches as interface circuits and
have had no problems since.
However, if you use many I/O lines, this additional protection needs
some PCB space... which seems like an advantage of microcontrollers. We let
students work with the Atmel ATmega16 and none of them died during the last
year. And the students did a lot to them...
Regards
Falk
"radarman" <jshamlet@gmail.com> schrieb im Newsbeitrag
news:1145585901.442569.73850@t31g2000cwb.googlegroups.com...
> Where I work, we aren't allowed to directly connect FPGA or CPLD pins
> directly to external connectors, save for on-board test points (like
> Mictor connectors). Everything goes through external buffers or
> registers. Yes, it does add latency, but it does protect
> hard-to-replace BGA's from damage.
>
> Of course, I work on military hardware, and reliability is a major
> factor. While most things are replaced at LRU (chassis) level, there
> are some systems where the customer is allowed to replace individual
> boards. Usually, this happens in a customer repair facility, and is
> done by military technicians, but still - it pays to go the extra mile.
>
> The other factor is that every board costs so much, that they are
> almost never thrown away, and instead reworked. It is much simpler to
> replace a buffer chip than a BGA.
>
> It is more expensive, but if you are worried about damaging boards with
> ESD or want to hot-slot safely, it's worth it.
>
> BTW - we use SRAM based FPGA's for everything except space
> applications. There, we use fusible-link devices from Actel or ASICs. A
> typical system will load dynamically over VME or PCI from a host
> controller, rather than local configuration memories - but that really
> shouldn't be a factor. (we do it to simplify inventory issues where a
> board may be sold to different customers)
>
> We do occasionally need a PAL or CPLD to implement something that just
> needs to be off-chip. A good example is controlling the PCI/VME based
> FPGA configuration process. (specifically, we use them as SVF players)
> We generally use flash-based devices for that, since they generally
> only need to be updated once - and speed isn't usually a concern.
>
> As far as I can tell, the SRAM FPGA's have been working just fine
> across a very wide spectrum of environmental conditions for a long
> time. Their reliability is actually quite good.
>
Reply by radarman●April 20, 2006
Where I work, we aren't allowed to directly connect FPGA or CPLD pins
directly to external connectors, save for on-board test points (like
Mictor connectors). Everything goes through external buffers or
registers. Yes, it does add latency, but it does protect
hard-to-replace BGA's from damage.
Of course, I work on military hardware, and reliability is a major
factor. While most things are replaced at LRU (chassis) level, there
are some systems where the customer is allowed to replace individual
boards. Usually, this happens in a customer repair facility, and is
done by military technicians, but still - it pays to go the extra mile.
The other factor is that every board costs so much, that they are
almost never thrown away, and instead reworked. It is much simpler to
replace a buffer chip than a BGA.
It is more expensive, but if you are worried about damaging boards with
ESD or want to hot-slot safely, it's worth it.
BTW - we use SRAM based FPGA's for everything except space
applications. There, we use fusible-link devices from Actel or ASICs. A
typical system will load dynamically over VME or PCI from a host
controller, rather than local configuration memories - but that really
shouldn't be a factor. (we do it to simplify inventory issues where a
board may be sold to different customers)
We do occasionally need a PAL or CPLD to implement something that just
needs to be off-chip. A good example is controlling the PCI/VME based
FPGA configuration process. (specifically, we use them as SVF players)
We generally use flash-based devices for that, since they generally
only need to be updated once - and speed isn't usually a concern.
As far as I can tell, the SRAM FPGA's have been working just fine
across a very wide spectrum of environmental conditions for a long
time. Their reliability is actually quite good.