I noticed that there are components for integer, fixed-point, and floating-point math in SciCompiler. The floating-point math even lets me choose resolution and bit width. Why can’t I just work with double precision like I do in C, which accurately represents most numbers?
Unlike a CPU, an FPGA is a hardware device in which every instantiated component physically consumes resources, and those resources are finite. Floating-point (FP) operations require significantly more hardware than fixed-point ones: a single FP operator can use tens of times the resources of its fixed-point equivalent.
Since SciCompiler generates code for FPGAs, every component, whether it performs floating-point, fixed-point, or integer math, must fit within the hardware budget of the target device. Floating-point operators, especially double-precision ones, consume large amounts of FPGA logic, DSP, and memory blocks, which limits the number and complexity of the operations you can implement.
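To see where those extra resources go, it helps to unpack the sub-steps hidden inside a single floating-point multiply. The following C sketch is purely illustrative (a hypothetical, simplified model that ignores zeros, subnormals, infinities, NaNs, rounding, and exponent overflow); each numbered step corresponds to dedicated logic the FPGA must implement, and only step 2, the raw integer multiply, would be needed for a fixed-point multiply.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Simplified model of an IEEE-754 single-precision multiply.
 * Assumes normal, nonzero inputs; omits rounding, subnormals,
 * infinities, NaNs, and overflow handling. */
static float fp32_mul_sketch(float a, float b)
{
    uint32_t ua, ub;
    memcpy(&ua, &a, sizeof ua);
    memcpy(&ub, &b, sizeof ub);

    /* Step 1: unpack the sign, exponent, and mantissa fields. */
    uint32_t sign  = (ua ^ ub) & 0x80000000u;
    int32_t  exp_a = (int32_t)((ua >> 23) & 0xFFu) - 127;
    int32_t  exp_b = (int32_t)((ub >> 23) & 0xFFu) - 127;
    uint64_t man_a = (ua & 0x7FFFFFu) | 0x800000u;  /* restore implicit leading 1 */
    uint64_t man_b = (ub & 0x7FFFFFu) | 0x800000u;

    /* Step 2: 24x24-bit integer multiply; this is the only step
     * a fixed-point multiply needs. */
    uint64_t prod = man_a * man_b;                  /* up to 48 bits */

    /* Step 3: normalize the product and adjust the exponent. */
    int32_t exp_r = exp_a + exp_b;
    if (prod & (1ULL << 47)) { prod >>= 1; exp_r += 1; }

    /* Step 4: keep 23 fraction bits (real hardware also rounds here). */
    uint32_t man_r = (uint32_t)(prod >> 23) & 0x7FFFFFu;

    /* Step 5: repack sign, exponent, and mantissa into the result. */
    uint32_t ur = sign | ((uint32_t)(exp_r + 127) << 23) | man_r;
    float r;
    memcpy(&r, &ur, sizeof r);
    return r;
}

int main(void)
{
    printf("%f\n", fp32_mul_sketch(2.0f, 3.0f));    /* prints 6.000000 */
    return 0;
}
```

Every unpack, normalize, and repack stage above becomes shifters, adders, and multiplexers on the FPGA, while a fixed-point multiply maps almost directly onto a single hardwired multiplier (DSP) block.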
Additionally, the latency of floating-point operations is much higher than that of fixed-point ones. A pipelined floating-point operator typically takes from about ten to several dozen clock cycles to produce its result, whereas a fixed-point operation can complete in a single clock cycle, or even combinationally within the current cycle (a latency of 0). This makes fixed-point math far more suitable for real-time processing on FPGAs, where speed and resource efficiency are critical.
Therefore, while floating point gives you flexibility in number representation, fixed-point math is the common and strongly recommended practice in FPGA design to optimize both resource usage and performance.
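As a concrete illustration, here is a minimal, hypothetical C sketch of Q16.16 fixed-point arithmetic (16 integer bits, 16 fractional bits), a common format for this kind of design; the type and helper names are illustrative and not part of SciCompiler.

```c
#include <stdint.h>
#include <stdio.h>

/* Q16.16 fixed-point: a value x is stored as the integer x * 2^16. */
typedef int32_t q16_16;

#define Q_ONE (1 << 16)

static q16_16 q_from_double(double x) { return (q16_16)(x * Q_ONE); }
static double q_to_double(q16_16 x)   { return (double)x / Q_ONE; }

/* Addition is a plain integer add: on an FPGA this is a single
 * combinational operation (the 0-cycle latency mentioned above). */
static q16_16 q_add(q16_16 a, q16_16 b) { return a + b; }

/* Multiplication needs a wider intermediate and one constant shift
 * to rescale; on an FPGA this maps onto one DSP multiplier block. */
static q16_16 q_mul(q16_16 a, q16_16 b)
{
    return (q16_16)(((int64_t)a * (int64_t)b) >> 16);
}

int main(void)
{
    q16_16 a = q_from_double(3.25);
    q16_16 b = q_from_double(0.5);
    printf("3.25 + 0.5 = %f\n", q_to_double(q_add(a, b)));  /* 3.750000 */
    printf("3.25 * 0.5 = %f\n", q_to_double(q_mul(a, b)));  /* 1.625000 */
    return 0;
}
```

Note how addition is a plain integer add and multiplication is one integer multiply plus a constant shift; this is why fixed-point operators are so cheap and fast on an FPGA, at the price of having to manage range and scaling yourself.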