
FPGA for audio DSP

Started by RogerG 2 years ago • 7 replies • latest reply 1 year ago • 1581 views

I am a complete beginner when it comes to FPGA so any advice would be helpful.

I have been looking at updating my audio system, which currently takes an analogue input signal from a CD player, filters it into 3 frequency bands, and adjusts the attenuation of each band using an optical encoder, a PIC18F4525 processor and 3 dual-channel digipots. The attenuated signals are then amplified separately. The system works really well, but I am looking to go digital on the front end.

I looked at using a fast processor for the DSP, but in a previous thread on embedded.com a member suggested an FPGA as a better solution. After checking into this I think it would work well, especially as the signal filtering would best be done in parallel for the three stereo channels. I would take the digital audio signal, probably S/PDIF, convert it to I2S, carry out the filtering, attenuate each frequency band, then convert to analogue for amplification.

Hopefully someone will tell me if this is the right approach. The next problem is which FPGA board to start learning with. Many of the starter boards have long lead times, but I have already downloaded the Lattice Diamond software and am learning a bit more about the circuit logic. Is the Lattice system OK, and could I make progress using simulation software without a board? My programming experience is mainly limited to assembly code with some C and Visual Basic. Is Verilog the way to go?


Reply by rdlt, February 16, 2023

A processor would be faster to develop and test. On the surface, a processor with I2S peripherals seems like a better fit. However, you asked about FPGAs, so here we go...

An FPGA has a fixed amount of hardware elements that can be used, and which FPGA you select is driven by how much hardware you need. Lattice FPGAs are a good option, although they are limited in the number of DSP elements and the amount of memory. A DSP element performs a multiply plus an add and/or accumulate. The FPGA can multiply much faster than your audio sample rate, which would allow you to re-use DSP elements, but you have to write the code to make that happen.
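To make the re-use idea concrete, here is a minimal sketch (the module name, widths, Q1.17 gain format and three-band split are placeholder assumptions, not anything specific to your system): one multiplier, clocked well above the audio sample rate, serves all three bands in turn, sequenced by a two-bit counter.

    // One shared multiplier applying a per-band gain to three bands in turn.
    // Assumes the clock runs many times faster than the audio sample rate.
    module shared_mult (
        input  wire               clk,
        input  wire               start,        // pulse once per audio sample
        input  wire signed [17:0] x0, x1, x2,   // current sample of each band
        input  wire signed [17:0] g0, g1, g2,   // per-band gain, Q1.17 (max ~1.0)
        output reg  signed [17:0] y0, y1, y2,   // attenuated outputs
        output reg                done
    );
        reg [1:0]          idx  = 2'd3;
        reg                busy = 1'b0;
        reg signed [17:0]  a, b;
        wire signed [35:0] p = a * b;            // the one shared multiplier

        // operand mux feeding the shared multiplier
        always @(*) begin
            case (idx)
                2'd0:    begin a = x0; b = g0; end
                2'd1:    begin a = x1; b = g1; end
                default: begin a = x2; b = g2; end
            endcase
        end

        always @(posedge clk) begin
            done <= 1'b0;
            if (start) begin
                idx  <= 2'd0;
                busy <= 1'b1;
            end else if (busy) begin
                case (idx)
                    2'd0: y0 <= p >>> 17;       // drop the Q17 fraction bits
                    2'd1: y1 <= p >>> 17;
                    2'd2: begin
                        y2   <= p >>> 17;
                        busy <= 1'b0;
                        done <= 1'b1;
                    end
                endcase
                idx <= idx + 2'd1;
            end
        end
    endmodule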

Debugging on FPGAs is different than on processors. There is no printf and no instruction stepping while running on hardware. Simulation is very important, as it is much easier to debug in simulation than in hardware. Once in hardware, you have logic-analyzer-style debugging that lets you capture snapshots of the signals.

To get a feel for Verilog and/or VHDL and the dev process check out https://www.edaplayground.com/. You can create and simulate for free and get a feel for how this works.  For real life examples of HDL code check out https://opencores.org/projects.  
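As a taste of what you would be doing there, here is a minimal self-contained example you could paste in: a trivial registered "divide by two" stage plus a testbench that drives it and prints the results (all names and values are arbitrary placeholders).

    `timescale 1ns/1ns

    // trivial device under test: registered pass-through that halves the input
    module dut (
        input  wire               clk,
        input  wire signed [15:0] din,
        output reg  signed [15:0] dout
    );
        always @(posedge clk)
            dout <= din >>> 1;          // divide by two, just to have something to check
    endmodule

    // testbench: drives a few samples and prints what comes out
    module tb;
        reg                clk = 0;
        reg  signed [15:0] din = 0;
        wire signed [15:0] dout;

        dut u_dut (.clk(clk), .din(din), .dout(dout));

        always #5 clk = ~clk;           // free-running clock for the simulation

        initial begin
            $dumpfile("tb.vcd");        // waveform file for the viewer
            $dumpvars(0, tb);
            repeat (8) begin
                @(negedge clk);
                din = din + 16'sd1000;
                @(posedge clk);
                #1 $display("in=%d out=%d", din, dout);
            end
            $finish;
        end
    endmodule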

Audio does not usually require linear phase, so look at IIR filters to reduce multiplier usage. FIRs are the go-to if you need linear phase, but they require more resources.
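As a hedged illustration of how cheap an IIR stage can be in an FPGA (this is only a first-order low-pass with a power-of-two coefficient, not a proper crossover; in practice you would cascade biquad sections designed for your crossover frequencies):

    // first-order IIR low-pass: y[n] = y[n-1] + (x[n] - y[n-1]) / 2^SHIFT
    // With a power-of-two coefficient it needs no multiplier at all.
    module iir_lowpass #(
        parameter SHIFT = 4                       // larger SHIFT -> lower cutoff
    )(
        input  wire               clk,
        input  wire               sample_valid,   // one pulse per audio sample
        input  wire signed [23:0] x,
        output reg  signed [23:0] y
    );
        initial y = 0;                            // power-up value (FPGA-style init)

        wire signed [24:0] diff = x - y;

        always @(posedge clk)
            if (sample_valid)
                y <= y + (diff >>> SHIFT);
    endmodule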

Think in terms of data flowing through your system. Draw out a block diagram of the major elements: I2S --> filter --> frequency shift --> etc. Then, for each of those elements, draw a new diagram using basic items like registers, muxes, multipliers, adders, etc. That will give you a road map for your HDL coding. Don't forget to think about your control path (UART, SPI, I2C, etc.) and plan how to control your data path.

VHDL vs. Verilog: I won't dive too deep into that debate. Instead, I would point you back to EDA Playground, where you can see and try both.


 

Reply by RogerG, February 16, 2023

Thanks for the information. I have a Teensy 4.1 which I have already started to experiment with, so I may continue to work with that as well as the FPGA. It depends on the time available; however, as I am retired I can put quite a bit of resource into it. It took me a long time to build my audio system, as I had to learn the assembly code and build all the circuit boards, but I got there in the end.

Reply by asser, February 16, 2023

Today, Cortex-M4 microcontrollers and similar parts reach speeds of approximately 100 MFLOPS. That is enough for this kind of real-time audio filtering: at a 48 kHz stereo sample rate it leaves roughly 1000 floating-point operations per sample per channel.

Such a chip usually has some ADC/DAC channels as well, and to connect a high-precision external ADC it provides suitable serial ports. The architecture is well suited to filtering; some filter design tools can even generate the assembly code for these filters, although they are usually programmed in C.

An FPGA can filter audio as well, but it is better suited to filtering signals that are sampled at rates above tens of MHz.

I have experience with audio filtering in FPGAs. I usually do it with a DSP48 block + BRAM controlled by an FSM, or with a separate DSP microprocessor. The design implements a loop that lasts hundreds of clock cycles, equal to one sample period.
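For illustration, here is a minimal sketch of such a per-sample loop (tap count, widths and the Q1.17 coefficient format are just assumptions; the coefficient memory would be filled from a filter design, and a true block-RAM version would register the memory reads, which this sketch skips for brevity):

    // Serial FIR: one multiply-accumulate per clock, N clocks per audio sample.
    // The single multiply-accumulate maps onto one DSP block.
    module serial_fir #(
        parameter N  = 64,                        // number of taps (power of two)
        parameter AW = 6                          // address width, 2**AW == N
    )(
        input  wire               clk,
        input  wire               sample_valid,   // one pulse per input sample
        input  wire signed [17:0] sample_in,
        output reg  signed [17:0] sample_out,
        output reg                out_valid
    );
        reg signed [17:0] coeff [0:N-1];          // Q1.17 coefficients, loaded elsewhere
        reg signed [17:0] hist  [0:N-1];          // circular sample history
        reg [AW-1:0]      wr_ptr = 0;
        reg [AW-1:0]      k;
        reg               busy   = 0;
        reg signed [47:0] acc;

        wire signed [17:0] x = hist[wr_ptr - k];  // k-th previous sample (index wraps)

        always @(posedge clk) begin
            out_valid <= 1'b0;
            if (sample_valid && !busy) begin
                hist[wr_ptr] <= sample_in;        // store the newest sample
                acc  <= sample_in * coeff[0];     // tap 0 uses it directly
                k    <= 1;
                busy <= 1'b1;
            end else if (busy) begin
                acc <= acc + x * coeff[k];        // one MAC per clock
                if (k == N - 1) begin
                    // scale back to sample width; assumes the filter gain is <= 1.0
                    sample_out <= (acc + x * coeff[k]) >>> 17;
                    out_valid  <= 1'b1;
                    busy       <= 1'b0;
                    wr_ptr     <= wr_ptr + 1'b1;
                end
                k <= k + 1'b1;
            end
        end
    endmodule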

Reply by engineer68, February 16, 2023
> " But it would be better to filter the signals which are quantized by the frequency higher than tenths of MHz."


...or use the FPGA's power to process more than one channel at the same time, thus generating and processing 1024 waves across 128 MIDI channels:
DRUMMIX - A Drum Computer in VHDL on a Xilinx Artix 7 FPGA

The largest FIR currently operates at almost 200 MHz in 8 lanes with 64k taps each.

Reply by Kosednar, February 16, 2023

FPGAs have a steep learning curve. I'm a retired EE and an FPGA veteran; I designed studio-grade video/audio products back in 2000 and have a new patent this year for VLSI devices. FPGAs are not like microprocessors; working with them is more like hardware design.

If you can use a microprocessor to do the job, you will be way ahead of the game. FPGAs suffer greatly from the global chip shortage, and you could wait YEARS to receive parts.

I recently designed in Intel/Altera FPGAs, and after completion I had to switch parts because Intel is not supplying small companies with parts.

Efinixinc.com has the T8 Trion series T8F81C2 FPGAs, with 8000 parts in stock at Digikey for $7 each and 400 development boards in stock at Digikey for $35 each. The development software for all of the FPGA vendors is not documented well and has bugs. The Efinix software requires Verilog, SystemVerilog or VHDL to program their parts; there is no schematic entry like Altera's.

If you are still interested in using FPGAs, I suggest Efinix. When you buy a development board for $35, Efinix gives you a one-year license for their development software. Their development boards need only a USB-B cable. The software allows you to write in Verilog with debug tools integrated. You can start small and test your code as you learn Verilog.



Reply by RogerG, February 16, 2023

Thanks for the info. I have had a look at their website; I hadn't heard of them until you mentioned it. I will certainly bear it in mind. I will also follow the microprocessor route.

Reply by engineer68, February 16, 2023

I would suggest that an MCU / audio DSP chip is the best solution for this, since these usually already provide the necessary interfaces nowadays. FPGAs should only be used for special reasons, since you have to more or less manually build up and connect hardware that is already present in such MCUs, and then synchronize it all in the data flow.

I2S could be such a reason, but it's not really necessary in your case: S/PDIF is easiest to convert to PCM directly in the FPGA for the subsequent processing. If you want to add a separate S/PDIF-to-I2S chip instead, you are better off with an audio DSP; some of them even have an S/PDIF interface built in!

However, if you want to start with audio on an FPGA for learning, you should use a board that already includes an audio chip with I2S. Some of the boards from Digilent (Xilinx) and Terasic (Altera) have such a chip and provide 3.5 mm jacks for feeding audio signals in directly.

You may want to start with some examples, understand the interfaces and logical/physical timing, and create a pass-through design to insert your code into. Filtering in FPGAs is more or less easy when it comes to maintaining functional timing and achieving low latency, and at the same time difficult when it comes to achieving physical timing (a thing that does not exist in CPUs).
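Such a pass-through stage can be as small as the sketch below (widths and names are arbitrary; the filter and gain logic later replaces the marked lines):

    // pass-through processing stage: samples go in one side and come out the
    // other unchanged; the real processing replaces the marked assignments.
    module passthrough (
        input  wire               clk,
        input  wire               in_valid,       // one pulse per sample from the I2S receiver
        input  wire signed [23:0] in_left,
        input  wire signed [23:0] in_right,
        output reg                out_valid,      // feeds the I2S transmitter
        output reg  signed [23:0] out_left,
        output reg  signed [23:0] out_right
    );
        always @(posedge clk) begin
            out_valid <= in_valid;
            if (in_valid) begin
                out_left  <= in_left;             // <-- processing goes here
                out_right <= in_right;            // <-- processing goes here
            end
        end
    endmodule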

Once this is understood, one can decide whether to use IIR or FIR filters for the upcoming purposes (or both side by side, as usual), thus perfecting a design and adapting it to individual needs.

In general, FPGAs offer a wide range of optimization trade-offs, from processing speed vs. design/development time, to resource usage, to precision. Some time ago I wrote an article about this in a forum, comparing a manual approach to filter design with an automatic implementation by MATLAB.

That filter had been proposed in the DSP group and I used it as an example to show different approaches:
Effizienz von MATLAB und HLS bei VHDL - Mikrocontroller.net

Below in the text there is a solution describing how to realize that filter in a tricky way in order to save multipliers in the FPGA.