Forums

FPGA for audio DSP

Started by RogerG 4 weeks ago · 5 replies · latest reply 3 weeks ago · 35 views

I am a complete beginner when it comes to FPGAs, so any advice would be helpful.

I have been looking at updating my audio system. It currently takes an analogue input signal from a CD player, filters it into 3 frequency bands, and adjusts the attenuation of each band using an optical encoder, a PIC18F4525 processor and 3 dual-channel digipots. The attenuated signals are then amplified separately. The system works really well, but I am looking to go digital on the front end.

I looked at using a fast processor for the DSP, but in a previous thread on embedded.com a member suggested an FPGA as a better solution. Having looked into this I think it would work well, especially as the filtering is best done in parallel for the three stereo channels. I would take the digital audio signal, probably SPDIF, convert it to I2S, carry out the filtering, attenuate each frequency band, then convert back to analogue for amplification.

Hopefully someone will tell me if this is the right approach. The next problem is which FPGA board to start learning with. Many of the starter boards have long lead times, but I have already downloaded the Lattice Diamond software and am learning a bit more about the circuit logic. Is the Lattice system OK, and could I make progress using simulation software without a board? My programming experience is mainly limited to assembly code, with some C and Visual Basic. Is Verilog the way to go?


Reply by rdlt, June 10, 2022

A processor would be faster to develop and test, and on the surface a processor with I2S peripherals seems like a better fit. However, you asked about FPGAs, so here we go...

An FPGA has a fixed number of hardware elements that can be used, and which FPGA you select is driven by how much hardware you need. Lattice FPGAs are a good option, though they are limited in the number of DSP elements and the amount of memory. A DSP element is a multiplier with a sum and/or accumulate. The FPGA can multiply much faster than your audio sample rate, which would allow you to re-use DSP elements, but you have to write the code to make that happen.
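To make the re-use idea concrete, here is a minimal sketch (module and signal names are my own invention, and the gains are assumed to be Q1.15 attenuations so results still fit in 24 bits): one multiplier serves all six channels (3 bands x stereo) because a new sample only arrives every few thousand clock cycles.

```verilog
// Hypothetical sketch: one shared multiplier applies a per-channel
// gain to a time-multiplexed audio stream (e.g. 3 bands x stereo =
// 6 channels), instead of instantiating six multipliers.
module shared_gain (
    input  wire               clk,
    // time-multiplexed audio in: one sample per in_valid pulse
    input  wire               in_valid,
    input  wire        [2:0]  in_ch,
    input  wire signed [23:0] in_sample,
    // gain write port, driven by whatever control path you choose
    input  wire               gain_we,
    input  wire        [2:0]  gain_addr,
    input  wire signed [15:0] gain_data,     // Q1.15, attenuation only
    // attenuated audio out, one clock later
    output reg                out_valid,
    output reg         [2:0]  out_ch,
    output reg  signed [23:0] out_sample
);
    reg signed [15:0] gain [0:5];            // one gain word per channel

    // the single shared multiply: 24-bit sample x 16-bit gain
    wire signed [39:0] prod = in_sample * gain[in_ch];

    always @(posedge clk) begin
        if (gain_we)
            gain[gain_addr] <= gain_data;    // control path updates a gain

        out_valid  <= in_valid;
        out_ch     <= in_ch;
        out_sample <= prod >>> 15;           // drop the Q1.15 scaling
    end
endmodule
```

The gain memory is also the natural hook for your control path: whatever block reads the optical encoder simply writes new gain words through gain_we/gain_addr/gain_data.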

Debugging on FPGAs is different than on processors. There is no printf and no instruction stepping while running on hardware. Simulation is very important, as it is much easier to debug in simulation than in hardware. Once in hardware you have logic-analyzer-style debugging that lets you capture snapshots of the signals.
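As a small illustration: in simulation you do get printf-style output via $display/$monitor, plus a waveform dump you can open in a viewer. A throwaway example (delay1 is just a stand-in module so the bench has something to drive):

```verilog
`timescale 1ns/1ps

// Stand-in DUT: a one-sample delay, only here so the bench runs.
module delay1 (
    input  wire               clk,
    input  wire signed [23:0] in,
    output reg  signed [23:0] out
);
    always @(posedge clk) out <= in;
endmodule

// Minimal testbench: drive samples, print results, dump a waveform.
module tb_delay1;
    reg clk = 0;
    reg signed [23:0] sample_in = 0;
    wire signed [23:0] sample_out;

    always #10 clk = ~clk;                     // 50 MHz clock

    delay1 dut (.clk(clk), .in(sample_in), .out(sample_out));

    integer i;
    initial begin
        $dumpfile("wave.vcd");                 // open in GTKWave / EPWave
        $dumpvars(0, tb_delay1);
        for (i = 0; i < 8; i = i + 1) begin
            @(negedge clk);                    // change inputs between clock edges
            sample_in = i * 1000;
            $display("t=%0t  in=%0d  out=%0d", $time, sample_in, sample_out);
        end
        $finish;
    end
endmodule
```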

To get a feel for Verilog and/or VHDL and the dev process, check out https://www.edaplayground.com/. You can create and simulate designs for free and see how the flow works.  For real-life examples of HDL code, check out https://opencores.org/projects.

Audio does not usually require linear phase, so look at IIR filters to reduce multiplier usage.  FIRs are the go-to if you need linear phase, but they require more resources.
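For a sense of scale: a biquad IIR section needs only five multiplies per output sample, so a handful of DSP elements covers a whole 3-band crossover, whereas a reasonably sharp FIR at audio rates can need dozens of taps. A rough single-channel sketch (coefficient values are placeholders, not a designed filter, and rounding/saturation are omitted):

```verilog
// Hypothetical fixed-point biquad, direct form I, one channel.
// y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]
module biquad_df1 #(parameter W = 24)(
    input  wire                clk,
    input  wire                sample_tick,    // one pulse per audio sample
    input  wire signed [W-1:0] x_in,
    output reg  signed [W-1:0] y_out
);
    // coefficients in signed Q2.14 -- placeholder values only
    wire signed [15:0] b0 = 16'sd4096, b1 = 16'sd8192, b2 = 16'sd4096;
    wire signed [15:0] a1 = -16'sd2000, a2 = 16'sd500;

    reg signed [W-1:0] x1 = 0, x2 = 0, y1 = 0, y2 = 0;   // delay registers

    // wide sum of the five products (the DSP-element work)
    wire signed [W+18:0] acc =
        x_in*b0 + x1*b1 + x2*b2 - y1*a1 - y2*a2;

    always @(posedge clk) begin
        if (sample_tick) begin
            y_out <= acc >>> 14;               // remove the Q2.14 scaling
            x2 <= x1;   x1 <= x_in;
            y2 <= y1;   y1 <= acc >>> 14;
        end
    end
endmodule
```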

Think in terms of data flowing through your system.  Draw out a block diagram of the major elements: I2S --> filter --> frequency shift --> etc.   Then for each of those elements draw a new diagram using basic items like registers, muxes, multipliers, adders, etc.  That will give you a road map for your HDL coding. Don't forget to think about your control path (UART, SPI, I2C, etc.) and plan how to control your data path.
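As an example of how one box on that diagram breaks down into registers and counters, here is a deliberately simplified I2S-style receiver (it ignores the one-bit data delay of real I2S, wider slot formats and clock-domain crossing, so treat it as a shape to study, not a drop-in block):

```verilog
// Simplified I2S-style receiver: just a shift register plus a
// word-select edge detect. Real I2S delays data one bit after the
// LR transition and may use wider slots; those details are omitted.
module i2s_rx_sketch #(parameter W = 24)(
    input  wire                bclk,       // bit clock from the S/PDIF-to-I2S chip
    input  wire                lrclk,      // word select: low = left, high = right
    input  wire                sdata,      // serial audio data, MSB first
    output reg  signed [W-1:0] left,
    output reg  signed [W-1:0] right,
    output reg                 pair_valid  // pulses when a stereo pair is ready
);
    reg [W-1:0] shift = 0;
    reg         lr_d  = 0;

    always @(posedge bclk) begin
        shift      <= {shift[W-2:0], sdata};  // shift one bit in, MSB first
        lr_d       <= lrclk;
        pair_valid <= 1'b0;
        if (lrclk != lr_d) begin              // word-select edge = word finished
            if (lrclk)
                left <= shift;                // low half just ended -> left word
            else begin
                right      <= shift;          // high half ended -> right word
                pair_valid <= 1'b1;           // whole stereo frame captured
            end
        end
    end
endmodule
```

The control path is worth sketching the same way, e.g. a small UART or SPI register bank that the data-path blocks read their settings from.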

VHDL vs. Verilog: I won't dive too deep into this debate.  Instead I would point you back to EDA Playground, where you can see and try both.


 

Reply by RogerG, June 11, 2022

Thanks for the information. I have a Teensy 4.1 which I have already started to experiment with, so I may continue to work with that as well as the FPGA. It depends on the time available; however, as I am retired I can put quite a bit of resource into it. It took me a long time to build my audio system, as I had to learn the assembly code and build all the circuit boards, but I got there in the end.

Reply by Kosednar, June 12, 2022

FPGAs have a steep learning curve.  I'm a retired EE and an FPGA veteran.  I designed studio-grade video/audio products back in 2000 and have a new patent this year for VLSI devices.  FPGAs are not like microprocessors; working with them is more like hardware design.

If you can use a microprocessor to do the job, you would be way ahead of the game.  FPGAs suffer greatly from the global chip shortage, and you could wait YEARS to receive parts.

I recently designed in Intel/Altera FPGAs, and after completion I had to switch parts because Intel is not supplying small companies with parts.

Efinixinc.com has the Trion T8 series T8F81C2 FPGAs, with 8000 parts in stock at Digikey for $7 each and 400 development boards in stock at Digikey for $35 each. The development software for all of the FPGA vendors is not documented well and has bugs.  The Efinix software requires Verilog, SystemVerilog or VHDL to program their parts (no schematic entry, unlike Altera).

If you are still interested in using FPGAs, I suggest Efinix.  When you buy a development board for $35, Efinix gives you a one-year license for their development software.  Their development boards use only a USB-B cable. The software allows you to write in Verilog, with debug software integrated.  You can start small and test your code as you learn Verilog.



Reply by RogerG, June 14, 2022

Thanks for the info. I have had a look at their website; I hadn't heard of them until you mentioned them. I will certainly bear them in mind. I will also follow the microprocessor route.

Reply by asser, June 14, 2022

Today, Cortex-M4 microcontrollers and similar parts reach speeds of up to approximately 100 MFLOPS. That is enough to perform any real-time audio filtering.
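A quick back-of-envelope check (assuming 48 kHz sampling): 100 MFLOPS / 48,000 samples per second ≈ 2,000 floating-point operations per sample. A biquad section costs roughly 5 multiplies and 4 additions, so even three bands x stereo with several biquads per band stays in the low hundreds of operations per sample, comfortably inside that budget.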

Such a chip usually has some ADCs/DACs on board as well, and it provides serial ports suited to connecting a high-precision external ADC. The architecture is well adapted to filtering, and some filter design tools generate the assembly code of these filters for it. These chips are usually programmed in C.

An FPGA can filter audio as well, but it is a better fit for signals sampled at tens of MHz and above.

I have experience filtering audio in FPGAs. I usually do it with a DSP48 block + BRAM controlled by an FSM, or with a separate DSP microprocessor. The design implements a loop that lasts hundreds of clock cycles, equal to one sample period.
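A rough sketch of that pattern (my own naming; DSP48 is the Xilinx term, but the idea applies to any FPGA multiplier block): an FSM starts on each sample strobe, walks the coefficient and delay-line memories, and feeds a single multiply-accumulate, finishing long before the next sample arrives. The coefficient file name is a placeholder and rounding/saturation are omitted.

```verilog
// One MAC + memories + an FSM, paced by the sample period.
// TAPS must be a power of two so the circular pointer wraps naturally.
module fir_mac #(
    parameter TAPS = 128,
    parameter W    = 24,          // sample width
    parameter CW   = 18           // coefficient width, assumed Q1.17
)(
    input  wire                clk,
    input  wire                sample_tick,   // one pulse per audio sample
    input  wire signed [W-1:0] x_in,
    output reg  signed [W-1:0] y_out,
    output reg                 y_valid
);
    reg signed [W-1:0]  dline [0:TAPS-1];     // delay line
    reg signed [CW-1:0] coeff [0:TAPS-1];     // filter coefficients

    initial $readmemh("coeffs.mem", coeff);   // placeholder file name

    localparam IDLE = 2'd0, RUN = 2'd1, DONE = 2'd2;
    reg [1:0]              state = IDLE;
    reg [$clog2(TAPS)-1:0] i     = 0;         // tap counter
    reg [$clog2(TAPS)-1:0] wr    = 0;         // circular write pointer
    reg signed [W+CW+7:0]  acc   = 0;         // headroom for the summed products

    always @(posedge clk) begin
        y_valid <= 1'b0;
        case (state)
            IDLE: if (sample_tick) begin
                dline[wr] <= x_in;            // newest sample overwrites the oldest
                acc       <= 0;
                i         <= 0;
                state     <= RUN;
            end
            RUN: begin
                // one multiply-accumulate per clock, walking from the
                // newest sample back through the delay line
                acc <= acc + dline[wr - i] * coeff[i];
                if (i == TAPS-1) state <= DONE;
                else             i     <= i + 1;
            end
            DONE: begin
                y_out   <= acc >>> (CW-1);    // remove the Q1.(CW-1) scaling
                y_valid <= 1'b1;
                wr      <= wr + 1;            // next write lands on the oldest slot
                state   <= IDLE;
            end
        endcase
    end
endmodule
```

With a 50 MHz clock and 48 kHz audio there are over a thousand clock cycles per sample, so a 128-tap loop like this finishes with plenty of margin.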