What’s Next in the World of FPGAs?

By Gus Lignos

Vice President Sales, MoSys

I just attended the Next FPGA Platform in San Jose. It was great to hear presentations, Q&A and panel discussions from representatives across the FPGA ecosystem, including executives from the FPGA manufacturers themselves: Achronix, Intel and Xilinx.

One of the speakers was Doug Burger from Microsoft, who has been driving their Data Center scale-out for many years. He highlighted that Microsoft began investigating FPGAs in the Data Center in 2011 and has continued to leverage them ever since; they have now deployed FPGAs in seven-digit quantities, and that number continues to grow exponentially. This massive growth of FPGAs in the data center is driven by the flexibility FPGAs provide and by how they help Microsoft execute on its strategy of staying agile in response to customer-driven deployments. He described FPGAs as the vehicle for cloud-scale innovation and scale-out in the post-Moore's Law era.

Similarly, in his keynote speech, Xilinx CTO Ivo Bolsens discussed the role of FPGAs in response to the breakdown of Moore's Law. Ivo said there is a perfect storm for compute acceleration in this era of Big Data, where it is critical to extract useful information from the ever-increasing velocity, variability and volume of data, and CPUs cannot keep up. Data Center build-outs require disaggregation of resources and more flexibility in how compute infrastructure is used, which has created the need for FPGAs as a base building block. Ivo continued that FPGAs are very different from CPUs in Data Center build-outs and are best thought of in terms of data flow. He went on to say that FPGAs allow a huge amount of programmability, both in compute resources and instructions and in the memory hierarchy. Search algorithms use thousands of compute nodes, and the movement of data within the chip and across nodes in the network remains an ongoing issue.

This is where MoSys fits in. With its Bandwidth Engine family of high-access-rate, serial-attached memory accelerator ICs for offloading FPGA on-chip memory, and with its recently announced entry into the IP business, MoSys is uniquely positioned to accelerate FPGA deployments. MoSys just issued a press release announcing its entry into the IP space with the introduction of the Graph Memory Engine (GME), which can run across various hardware platforms, from CPUs to FPGAs and, for highest performance, on the embedded RISC cores of its high-end Serial Memory Programmable HyperSpeed Engine (PHE) IC. Ivo noted that Data Center scale-out is no longer a problem of tens of cores but of thousands of nodes. The MoSys GME can process anywhere from 30M nodes per second on a single CPU up to 3B nodes per second.
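To make those throughput figures concrete: graph traversal is essentially pointer chasing, so the number of nodes a system can process per second is bounded by how many dependent memory accesses per second it can sustain. The minimal C sketch below (illustrative only, not MoSys code; the graph layout and function names are hypothetical) shows that access pattern in a simple breadth-first walk.

```c
/*
 * Minimal sketch (not MoSys code): a breadth-first graph walk showing why
 * node-processing rate tracks memory access rate. Every step chases an
 * index into memory before the next node can be touched, so a device that
 * sustains more random accesses per second visits more nodes per second.
 * Names and data layout here are illustrative only.
 */
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    int num_nodes;
    int *row_start;   /* CSR row offsets: neighbours of node v are        */
    int *neighbors;   /* neighbors[row_start[v] .. row_start[v+1]-1]      */
} Graph;

/* Visit every node reachable from 'root' and return how many were touched.
 * Each neighbour lookup is a dependent memory access, which is exactly the
 * part a high-access-rate memory accelerator would speed up. */
static long bfs_count(const Graph *g, int root)
{
    char *seen  = calloc(g->num_nodes, 1);
    int  *queue = malloc(g->num_nodes * sizeof(int));
    long visited = 0;
    int head = 0, tail = 0;

    queue[tail++] = root;
    seen[root] = 1;

    while (head < tail) {
        int v = queue[head++];
        visited++;
        for (int i = g->row_start[v]; i < g->row_start[v + 1]; i++) {
            int u = g->neighbors[i];      /* random access into the graph */
            if (!seen[u]) {
                seen[u] = 1;
                queue[tail++] = u;
            }
        }
    }
    free(seen);
    free(queue);
    return visited;
}

int main(void)
{
    /* Tiny 4-node example: 0->1, 0->2, 1->3, 2->3 */
    int row_start[] = {0, 2, 3, 4, 4};
    int neighbors[] = {1, 2, 3, 3};
    Graph g = {4, row_start, neighbors};

    printf("visited %ld nodes\n", bfs_count(&g, 0));
    return 0;
}
```

On the figures quoted above, the jump from roughly 30M nodes per second on a single CPU to 3B nodes per second is about a 100x difference in sustained node-access rate, which is the gap a dedicated, high-access-rate memory device is intended to close.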


If you are looking for more technical information or need to discuss your technical challenges with an expert, we are happy to help. Email us and we will arrange to have one of our technical specialists speak with you. You can also sign up for our newsletter. Finally, please follow us on social media so we can keep in touch.
