Snowflake Teams with NVIDIA to Deliver Full-Stack AI Platform for Customers

Snowflake, the Data Cloud Company, announced at NVIDIA GTC an expanded collaboration with NVIDIA that further empowers enterprise customers with an AI platform, bringing together the full-stack NVIDIA accelerated computing platform and the trusted data foundation and secure AI of Snowflake’s Data Cloud. Together, Snowflake and NVIDIA deliver a secure and formidable combination of infrastructure and compute capabilities designed to unlock and accelerate AI productivity and fuel business transformation across every industry.

Sridhar Ramaswamy, CEO of Snowflake, says, “Data is the fuel for AI, making it essential to establishing an effective AI strategy. Our partnership with NVIDIA is delivering a secure, scalable and easy-to-use platform for trusted enterprise data. And we take the complexity out of AI, empowering users of all types, regardless of their technical expertise, to quickly and easily realize the benefits of AI.”

Expanding on Snowflake and NVIDIA’s previously announced NVIDIA NeMo integration, Snowflake customers will soon be able to utilize NVIDIA NeMo Retriever directly on their proprietary data in the Data Cloud, all while maintaining data security, privacy, and governance seamlessly through Snowflake’s built-in capabilities.

Jensen Huang, founder and CEO of NVIDIA, says, “Enterprise data is the foundation for custom AI applications that can generate intelligence and reveal new insights. Bringing NVIDIA accelerated computing and software to Snowflake’s data platform can turbocharge enterprise AI adoption by helping customers build, deploy and manage secure generative AI applications.”

NeMo Retriever enhances the performance and scalability of chatbot applications and can accelerate time to value for the more than 400 enterprises already building AI applications with Snowflake Cortex, Snowflake’s fully managed large language model (LLM) and vector search service (some features may be in preview). The expanded collaboration will also include the availability of NVIDIA TensorRT software, which delivers low latency and high throughput for deep learning inference, to enhance LLM-based search capabilities.
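To give a concrete sense of what building on Snowflake Cortex looks like, the sketch below calls a Cortex LLM function from Python through the Snowflake connector. It is an illustrative example only, not part of the announcement: the connection parameters, prompt, and model name shown are placeholders to adapt to your own account and data.

import snowflake.connector

# Connect to a Snowflake account; every parameter below is a placeholder.
conn = snowflake.connector.connect(
    account="YOUR_ACCOUNT",
    user="YOUR_USER",
    password="YOUR_PASSWORD",
    warehouse="YOUR_WAREHOUSE",
)

try:
    cur = conn.cursor()
    # SNOWFLAKE.CORTEX.COMPLETE runs a managed LLM over the prompt text,
    # so the data stays inside Snowflake's security and governance boundary.
    cur.execute(
        """
        SELECT SNOWFLAKE.CORTEX.COMPLETE(
            'mistral-large',  -- example model name; use any Cortex-supported model
            'Summarize the key themes in our latest customer feedback.'
        )
        """
    )
    print(cur.fetchone()[0])
finally:
    conn.close()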