This video is part of the appearance, “NeuroBlade Presents at Cloud Field Day 19”. It was recorded as part of Cloud Field Day 19 at 8:00-9:30 on January 31, 2024.
Watch on YouTube
Watch on Vimeo
Mordechai Blaunstein discusses the critical need for acceleration in big data analytics. Showcasing NeuroBlade’s SQL Processing Unit (SPU), he demonstrates how it improves the speed and scale of big data processing. Attendees gain insight into the trends defining the big data analytics landscape, including Data Lake architecture and open standards, and learn how NeuroBlade is addressing the demand for rapid data analytics acceleration in data centers.
Blaunstein emphasizes the exponential growth of data and the associated challenges in compute, networking, storage, energy, space, and software. He outlines the market trends that shaped NeuroBlade’s solution, including the shift toward Data Lake and Data Lakehouse architectures, the need for rapid data analytics in the cloud, and the adoption of open standards.
Blaunstein explains that traditional CPUs cannot efficiently scale to meet the demands of big data analytics, and while GPUs offer better performance, they come with high costs and limited availability. NeuroBlade’s SPU is presented as a purpose-built accelerator for data analytics, designed to enhance performance without exorbitant costs.
The SPU can be integrated into existing software ecosystems and supports common workloads like Spark, Presto, and ClickHouse. It is offered as a PCIe card for easy integration into servers. Blaunstein mentions that the SPU is particularly suited for large enterprises with substantial datasets that cannot be moved to the cloud due to regulatory or other constraints.
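The presentation does not cover integration details, but as a rough sketch of how an accelerator typically plugs into one of these engines, the snippet below uses Spark’s standard spark.plugins mechanism. The plugin class name is a placeholder, not NeuroBlade’s actual SDK, which is not described in the talk.

```python
# Hypothetical sketch only: NeuroBlade's real SDK/plugin names are not given in the talk.
# Spark 3.x exposes a generic plugin hook (spark.plugins) through which external
# accelerators are commonly wired in; the class name below is a placeholder.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("spu-accelerated-analytics")
    .config("spark.plugins", "com.example.spu.SparkPlugin")  # placeholder class name
    .getOrCreate()
)

# The SQL itself is unchanged; any offload would happen below the query layer,
# consistent with the talk's point that the SPU slots into existing ecosystems.
spark.sql("SELECT count(*) FROM parquet.`/data/events`").show()
```

The design point this illustrates is the one Blaunstein makes: queries and tooling stay the same, and acceleration happens beneath the analytics layer.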
Questions from the audience address topics such as the SPU’s target market, competition with other accelerators, integration with analytical layers, licensing, and open-source availability of the software SDK and API. Blaunstein clarifies that the SPU is optimized for one card per server, as it already fully utilizes the server’s I/O capabilities, and that sharding is handled by the data analytics tools above NeuroBlade’s layer.
Personnel: Mordi Blaunstein