RE: LeoThread 2024-10-22 09:10

in LeoFinance

“There’s a lot we have in common with generative models, and a lot we don’t. But one thing that’s absolutely different is the latency,” Lappas explained. “Our inference needs to happen in microseconds so that we can close the loop on these processes.” With no off-the-shelf solution available for the data or the compute, they had to build the GPU/FPGA “AI on steroids” combo from scratch.
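The microsecond budget Lappas describes is easier to picture with a quick measurement. This is a generic sketch, not the actual GPU/FPGA system from the quote: it times a toy dense-layer "inference" in Python (the `measure_latency_us` helper and the 64×64 weight matrix are illustrative assumptions) to show how per-call latency is typically benchmarked.

```python
import time

import numpy as np


def measure_latency_us(fn, *args, warmup=10, iters=1000):
    """Return the mean per-call latency of fn(*args) in microseconds.

    Warmup calls run first so caches and lazy initialization do not
    inflate the timed loop.
    """
    for _ in range(warmup):
        fn(*args)
    start = time.perf_counter()
    for _ in range(iters):
        fn(*args)
    elapsed = time.perf_counter() - start
    return elapsed / iters * 1e6


# Stand-in "model": one small dense layer applied to one input vector.
# A real low-latency pipeline would run this on dedicated hardware.
w = np.random.rand(64, 64).astype(np.float32)
x = np.random.rand(64).astype(np.float32)

latency_us = measure_latency_us(np.dot, w, x)
print(f"mean inference latency: {latency_us:.1f} us")
```

Even this trivial operation in interpreted Python typically lands in the single-digit-microsecond range per call; a full model with data movement is far slower, which is why closing a microsecond-scale loop pushes teams toward custom GPU/FPGA stacks.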