On 10/18/2016 4:35 AM, David Brown wrote:
> On 18/10/16 00:45, rickman wrote:
>> On 10/17/2016 6:25 AM, David Brown wrote:
>>> On 17/10/16 09:56, rickman wrote:
>>>> On 10/16/2016 8:55 PM, Tim Wescott wrote:
>>>>> On Sun, 16 Oct 2016 20:22:29 -0400, rickman wrote:
>>>>>
>>>>>> I found this pretty impressive. I wonder if this is why Intel bought
>>>>>> Altera or if they are now working together on this? Ulp! Seek and
>>>>>> ye shall find....
>>>>>>
>>>>>> "Microsoft is using so many FPGA the company has a direct influence
>>>>>> over
>>>>>> the global FPGA supply and demand. Intel executive vice president,
>>>>>> Diane
>>>>>> Bryant, has already stated that Microsoft is the main reason behind
>>>>>> Intel's decision to acquire FPGA-maker, Altera."
>>>>>>
>>>>>> #Microsoft's #FPGA Translates #Wikipedia in less than a Tenth of a
>>>>>> Second http://hubs.ly/H04JLSp0
>>>>>>
>>>>>> I guess this will only steer the FPGA market more in the direction of
>>>>>> larger and faster rather than giving us much at the low end of energy
>>>>>> efficient and small FPGAs. That's where I like to live.
>>>>>
>>>>> Hopefully it'll create a vacuum into which other companies will grow.
>>>>> Very possibly not without some pain in the interim. Markets change;
>>>>> we have to adapt.
>>>>
>>>> I've never been clear on the fundamental forces in the FPGA business.
>>>> The major FPGA companies have operated very similarly, catering to the
>>>> telecom markets while paying little more than lip service to the rest
>>>> of the electronics world.
>>>>
>>>> I suppose there is a difference in technology requirements between MCUs
>>>> and FPGAs. MCUs often are not even near the bleeding edge of process
>>>> technology, while FPGAs seem to drive it to some extent. Other than
>>>> Intel, who always seems to be the first to bring out chips at a given
>>>> process node, the FPGA companies are a close second. But again, I think
>>>> that is driven by their serving the telecom market, where density is
>>>> king.
>>>>
>>>> So I don't see any fundamental reason why FPGAs can't be built on older
>>>> processes to keep price down. If MCUs can be made in a million
>>>> combinations of RAM, Flash and peripherals, why can't FPGAs? Even
>>>> analog is used in MCUs; why can't FPGAs be made with the same processes,
>>>> giving us programmable logic combined with a variety of ADCs, DACs and
>>>> comparators on the same die? Put them in smaller packages (lower pin
>>>> counts, not the micro-pitch BGAs) and let them be used like MCUs.
>>>
>>> As far as I understand it, there is quite a variation in the types of
>>> processes used - it's not just about the feature size. The number of
>>> layers, the types of layers, the types of doping, the fault tolerance,
>>> etc., all play a part in what fits well on the same die. So you might
>>> easily find that if you put an ADC on a die setup that was good for FPGA
>>> fabric, then the ADC would be a lot worse (speed, accuracy, power
>>> consumption, noise, cost) than usual. Alternatively, your die setup
>>> could be good for the ADC - and then it would give a poor quality FPGA
>>> part.
>>
>> What's a "poor" FPGA?
>
> What is a "good" FPGA? It has fast switching, predictable timing, low
> power, low cost, lots of gates, registers and memory, flexible routing,
> etc. A "poor" FPGA is one that is significantly worse in some or all of
> these features than you might otherwise expect.
So which of these go to hell when you use a process in use by MCU
makers? Heck, you mention Flash putting the clock back a couple of
process nodes, but that is what I am using: Lattice Flash FPGAs.
>> MCUs have digital logic, and usually digital logic that is as fast as
>> possible. They also want the lowest possible power consumption. What
>> part of that is bad for an FPGA?
>
> The digital parts of an MCU are fixed. Each gate in an MCU design is a
> tiny fraction of the size, cost, power and latency of a logic element in
> an FPGA. Just compare the speed, die size and power of a hard cpu macro
> in an FPGA with a soft cpu on the same device - the hard macro is hugely
> superior in every way except flexibility.
None of that is relevant. The point is the process used for MCUs with
Flash, RAM and analog is just as good for FPGAs if you aren't trying to
be on the bleeding edge.
> Now, I don't have any good references for what I am writing here - just
> "things I have read" and "things I know about". So if you or anyone
> else knows better, I am happy to be corrected - and if any of this is
> important to you (rather than just for interest), please check it with
> more knowledgeable people. With that disclaimer,...
>
> There are important differences between the die stackup for FPGA design
> and other types of digital logic. The most obvious feature is that for
> high-end FPGA's, there are many more layers in the die than you usually
> get for microcontrollers or even fast cpus. FPGA's need a /lot/ more
> routing lines than fixed digital parts. These routes are mostly highly
> symmetrical, and can be tightly packed because only a small fraction of
> them are ever active in any given design - you don't need enough power
> or heat dissipation for them all. On a microcontroller or other digital
> part, you have far more complex routing patterns; most routes are short
> distance, and most of them can be active at the same time. On
> memory parts, you have a different type of routing pattern again - few
> layers, with a lot of symmetry, and a lot of simultaneous switching on
> some of the buses.
You said the key words... "high-end FPGA's" [sic]. I'm not talking
about high-end FPGAs. I'm talking about small parts at the low end
combined with an MCU and analog. Even the Xilinx Zynq parts use very
fast, very power-hungry CPUs that require off-chip memory. Totally
different market... as usual, the telecom market.
> The point is, the optimal die stackup and process technology for an FPGA
> is different from the optimal setup for an MCU, a memory block, analogue
> blocks, etc. So when you combine these, you are making a lot of
> compromises.
Compromises, yes; "a lot of"... I don't know. That's my point. They
make those compromises for MCUs and seem to make it work. You have said
nothing about why a useful FPGA can't be made using the same process as
an MCU. You just talk about what they do when trying to squeeze every
bit out of the silicon for the express purpose of making large, fast
FPGAs. Not every design needs large or fast.
> It is relatively easy and cost-effective to take a big, expensive FPGA
> die design and stick a little processor on it somewhere. You can spread
> the normal cpu routing amongst the many FPGA routing layers for better
> power and heat spreading. It will be a little bigger and slower than a
> dedicated cpu die could be, and the cost per mm² is higher - but that
> extra cost is small in the total cost of the chip.
>
> But you cannot take an optimised microcontroller or cpu die design and
> add serious FPGA hardware to it - you simply don't have the routing
> space. You can add some CPLD-type programmable logic without too much
> extra cost (look at the AVR XMega E series, or the PSoC devices) because
> that kind of programmable logic puts more weight on complex logic blocks
> and less on the routing.
This does not make sense at all. First, most CPLD-type devices are
actually FPGA-type devices of smaller capacity. Second, an FPGA uses
die space. There is nothing magical about how much space is needed for
routing or anything else. Just add a block of FPGA fabric to an MCU
with an appropriate special interface and Bob's your uncle. The proof
of the pudding is the fact that it has been done. My question is why
this isn't done more often, with a wider variety of parts and, in
particular, by more vendors.
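To illustrate how unmagical that interface could be: the CPU side might
just be a memory-mapped register window into the fabric. A sketch in C;
the addresses, register names and handshake here are all invented for
illustration, not taken from any real part:

#include <stdint.h>

/* Hypothetical memory-mapped window into an on-die FPGA fabric block.
 * The base address, registers and bits are all invented. */
#define FABRIC_BASE    0x40080000u
#define FABRIC_CTRL    (*(volatile uint32_t *)(FABRIC_BASE + 0x00u))
#define FABRIC_STATUS  (*(volatile uint32_t *)(FABRIC_BASE + 0x04u))
#define FABRIC_DATA    (*(volatile uint32_t *)(FABRIC_BASE + 0x08u))

#define FABRIC_ENABLE  (1u << 0)   /* release the fabric from reset */
#define FABRIC_READY   (1u << 0)   /* fabric has a result waiting   */

/* Hand one word to custom logic in the fabric and wait for the result. */
static uint32_t fabric_transform(uint32_t in)
{
    FABRIC_CTRL = FABRIC_ENABLE;
    FABRIC_DATA = in;                         /* the write starts the logic */
    while ((FABRIC_STATUS & FABRIC_READY) == 0)
        ;                                     /* spin until it finishes     */
    return FABRIC_DATA;                       /* read back the result       */
}

From the CPU's point of view that is no different from talking to any
other on-chip peripheral.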
> Note that flash is also a poor fit for both MCU and FPGA die stacks.
> For flash, you want a different kind of transistor than for the purely
> digital parts, you have significant analogue areas, and you need to deal
> with high voltages and a charge pump. The match between an MCU and
> flash is not too bad - so the combination of the two parts on the same
> die is clearly a "win" overall. But if you want the best (cheapest,
> fastest, highest density, lowest power) flash block, you don't mix it
> with a cpu on the same die - similarly if you want the best cpu block.
> As far as I know (and as noted above, I may be wrong), Flash FPGA
> devices are made with a large FPGA die and a small serial flash die
> packaged together.
Yes, none of these things want to be on the same die, and yet it
happens. You are mistaken about the Flash FPGAs. Only Xilinx adds a
flash chip to an FPGA chip in one package. That offers little
advantage. The Lattice parts have the flash on the die and offer *much*
faster configuration load times, on the order of 1 ms instead of 100s of
ms.
> You get the same for analogue parts. You can buy devices that are good
> microcontrollers with okay analogue parts built in. You can buy devices
> that are basically high-end analogue parts with a half-decent
> microcontroller tagged on. But you /cannot/ buy a device that has
> high-end analogue interfaces /and/ a high-end processor or
> microcontroller, all on the same die.
So?
> It is just like PCB design. You do not easily match 1000V IGBT
> switchers, 1000-pin 0.4mm pitch BGAs, and 24-bit ADCs on the same board.
>
>
>> Forget the analog. What do you
>> sacrifice by building FPGAs on a line that works well for CPUs with
>> Flash and RAM? If you can also build decent analog with that you get an
>> MCU/FPGA/Analog device that is no worse than current MCUs.
>>
>>
>>> Microcontrollers are made with a compromise. The cpu part is not as
>>> fast or efficient as a pure cpu could be, nor is the flash part, nor the
>>> analogue parts. But they are all good enough that the combination is a
>>> saving (in dollars and watts, as well as mm²) overall.
>>
>> It's not much of a compromise. As you say, they are all good enough. I
>> am sure an FPGA could be combined with an MCU with little loss of what
>> defines an FPGA.
>>
>
> As I wrote above, the compromise is significant. It is certainly worth
> making in some cases - and I too would like to see such combined
> devices. And I think we will see such devices turning up - technology
> progress will reduce the technical disadvantages, and economy of scale
> will reduce the cost disadvantages. But it is not as simple a matter as
> you might think.
I don't know where you get the "significant" part. They sell literally
billions of MCUs with analog on them each year. Obviously the
compromise is not so bad.
> And then, of course, there are the joys of making tools that let
> developers work easily with the whole system - that is not a small matter.
Only if you try to make it complex and ugly. Interfacing an FPGA to a
CPU is not hard.
> I believe that what we will see first is something more like the
> above-mentioned Atmel XMega E series, or some of the PIC devices
> (AFAIK), where you have a "normal" microcontroller with a bit of
> programmable logic. This will give designers a good deal more
> flexibility in their layouts. Rather than buying a part with 3 UARTs
> and 2 SPI where one of the SPI's shares the pins of one of the UARTs,
> the developer could use the chip's pin switch matrix to get all 5
> interfaces at once. Some simple PLD blocks could give you high-speed
> interfaces without external glue logic, and they could let the chip
> support a wide range of timer functions without the chip designer having
> to think of every desirable combination in advance.
You mean you have seen these before Microsemi (formerly Actel) came out
with their SmartFusion and SmartFusion2 devices?
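Not that the firmware side of a pin switch matrix is anything exotic,
either. A sketch in C of the idea; every register name and function
code here is invented, not from any real part:

#include <stdint.h>

/* Hypothetical per-pin mux: one 32-bit function-select register per
 * pin. The base address, names and codes are all invented. */
#define PINMUX(n)  (*(volatile uint32_t *)(0x40044000u + 4u * (n)))

enum pin_function {
    FN_GPIO = 0,
    FN_UART_TX, FN_UART_RX,
    FN_SPI_SCK, FN_SPI_MOSI, FN_SPI_MISO
};

/* Route a UART and an SPI to five free pins - nothing shared, so
 * both interfaces are usable at the same time. */
static void route_peripherals(void)
{
    PINMUX(10) = FN_UART_TX;
    PINMUX(11) = FN_UART_RX;
    PINMUX(12) = FN_SPI_SCK;
    PINMUX(13) = FN_SPI_MOSI;
    PINMUX(14) = FN_SPI_MISO;
}

The point is just that any pin can carry any function, so the
3-UART/2-SPI pin conflict David describes goes away.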
>>> But I think there are some FPGA's with basic analogue parts, and
>>> certainly with flash. There are also microcontrollers with some
>>> programmable logic (more CPLD-type logic than FPGA). Maybe we will see
>>> more "compromise" parts in the future, but I doubt if we will see good
>>> analogue bits and good FPGA bits on the same die.
>>
>> I know of one (well one line) from Microsemi (formerly Actel),
>> SmartFusion (not to be confused with SmartFusion2). They have a CM3
>> with a SAR ADC and sigma-delta DAC, comparators, etc. in addition to the
>> FPGA. So clearly this is possible and it is really a marketing issue,
>> not a technical one.
>
> No, it is a combination of many issues and compromises. When Actel saw
> the success of SmartFusion and thought how they could make a new
> SmartFusion2 family, they did not think "no one really wants analogue
> interfaces, so we can remove that" - they made the sacrifices needed to
> get the other features they needed. It was very much a compromise.
What other features? Engineering doesn't dictate products. Marketing
does. Clearly Microsemi feels there is not enough of a market to
provide the "everything" chip. I've had this discussion with Xilinx,
and they don't say it is too hard to do. They say it makes the number
of different line items they have to inventory far too large. That's
not an engineering problem. That is exactly what they do with MCUs:
dozens or even hundreds of different versions. It just needs to be what
your company wants to do... as decided by marketing.
> But you are right that the SmartFusion shows that combinations can be
> made - just as the SmartFusion2 shows that it is not a simple matter.
>
>>
>> The focus seems to be on the FPGA
>
> Indeed. And that is how it (currently, at least) must be if you want
> decent FPGA on the device.
>
>> , but they do give a decent amount of
>> Flash and RAM (up to 512 kB and 64 kB respectively). My main issue is the
>> very large packages, all BGA except for the ginormous TQ144. I'd like
>> to see 64- and 100-pin QFPs.
>
> The packaging is something that should be easier to change - there is no
> technical reason not to put the same chip in a lower pin package (as
> long as the package is big enough for the die and a carrier pcb, of course).
>
>>
>>
>>> What will, I think, make more of a difference is multi-die packaging -
>>> either as side-by-side dies or horizontally layered dies. But I expect
>>> that to be more on the high-end first (like FPGA die combined with big
>>> ram blocks).
>>
>> Very pointless, not to mention costly. You lose a lot running the FPGA
>> to MCU interface through I/O pads for some applications. That is how
>> Intel combined FPGAs with their x86 CPUs initially, though. But it is a
>> very pricey result.
>
>
> No, it is certainly not pointless - although it certainly /is/ costly at
> the moment. Horizontal side-by-side packaging is an established
> technique, and is used in a number of high-end devices. If you have a
> wide and fast memory bus, then the whole thing can be much smaller,
> simpler and lower power if the dies are adjacent and you have short,
> thin traces between dies on a carrier pcb within the package. The board
> designer has no issues with length or impedance matching, and the line
> drivers are far smaller and lower power.
>
> Vertical die-on-die stacking is a newer technology, with a good deal of
> research into a variety of techniques. It is already in use for
> symmetrical designs such as multi-die DRAM and Flash packages. But the
> real benefit will come with DRAM dies connected to processor or FPGA
> dies. Rather than having a 64-bit wide databus with powerful
> bi-directional drivers, complex serialisation/deserialisation hardware,
> PLL's, etc., a 20-bit address/command bus with tracking of pages,
> pre-fetches, etc., you could just have a full-duplex 512-bit wide
> databus and full address bus, with everything running at a lower clock
> rate and data lines driven over a distance of a mm or two. Total system
> power would be cut drastically, as would latency, and you could drop
> much of the complex interface and control circuitry on both sides of the
> link. Your DRAM starts to look more like wide tightly-coupled SRAM -
> your processor can drop all but its L0 cache.
>
> There are still many manufacturing challenges to overcome, and heat
> management is hard, but it will come - the potential benefits are enormous.
You are talking about an entirely different world than I am. You are
still talking about the markets Xilinx and Altera are going for: large,
fast FPGAs. Only expensive parts can justify multi-die packages and the
large, complex functions you are describing. That is exactly what I
don't need or want.
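For what it's worth, the arithmetic behind the wide slow bus does check
out. A quick sketch in C, with illustrative numbers that are not from
any datasheet:

#include <stdio.h>

/* Back-of-envelope check of the wide-slow vs. narrow-fast bus claim.
 * All of these numbers are illustrative. */
int main(void)
{
    double narrow = (64.0 / 8.0) * 1600e6;  /* 64-bit bus at 1600 MT/s */
    double wide   = (512.0 / 8.0) * 200e6;  /* 512-bit bus at 200 MHz  */

    printf("narrow: %.1f GB/s\n", narrow / 1e9);  /* 12.8 GB/s */
    printf("wide:   %.1f GB/s\n", wide / 1e9);    /* 12.8 GB/s */
    return 0;
}

Same bandwidth at an eighth of the clock rate, which is where the power
and latency savings would come from. But that is still a high-end game,
not the class of part I'm talking about.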
Look at the data sheet for a 64-pin ARM CM3 CPU chip. You will find
lots of Flash, RAM and analog peripherals. None of them work poorly.
They sell TONS of them, literally.
MCU makers are afraid of FPGAs and don't have access to the patents.
FPGA makers have their telecom blinders on, and now, with Microsoft
getting into the server hardware market, that may be the next big thing
for FPGAs.
That's my point. There is no reason why smaller devices can't be made
like the SmartFusion and SmartFusion2. Even those parts are more FPGA
than MCU, with hundreds of pins and large packages. I would like to see
products just like a 64-pin MCU with some analog, clock oscillators,
brownout detection, etc. There is no technical reason why this can't be
done.
--
Rick C