Zoho Corporation has revealed that it will use NVIDIA’s AI-accelerated computing platform, which includes NVIDIA NeMo, part of NVIDIA AI Enterprise software, to develop and deploy its large language models (LLMs) in its SaaS applications. These LLMs will be available to Zoho’s more than 700,000 customers worldwide through ManageEngine and Zoho.com.

Over the past year, Zoho has invested more than $10 million in NVIDIA AI and GPU technology, and it plans to invest another $10 million in the coming year. The announcement was made at the NVIDIA AI Summit in Mumbai.

“Many LLMs currently on the market are designed for consumer use, offering little value to businesses. At Zoho, our mission is to develop LLMs specifically tailored for a wide range of business use cases. Owning all of our technology, with products spanning multiple business functions, allows us to integrate the essential component that makes AI truly effective: context,” says Ramprakash Ramamoorthy, AI Director at Zoho Corporation.

Zoho Corporation will use NVIDIA NeMo

Zoho has been developing its own AI technology for more than a decade, integrating it contextually across its portfolio of 100+ products in the ManageEngine and Zoho divisions. Its approach to AI is multimodal, aimed at delivering the contextual intelligence that helps users make business decisions.

Alongside its LLMs, the company is building small and medium-sized language models, giving customers a choice of model sizes so that each use case can be matched to the one that delivers the best results. This also allows companies with less data to benefit from AI. Privacy is a core principle of Zoho’s AI strategy, so its LLMs are not trained on customer data.

Vishal Dhupar, NVIDIA’s general manager for South Asia, said the ability to choose from a range of AI model sizes allows businesses to tailor their AI solutions to their specific needs, balancing performance with cost-effectiveness. Using NVIDIA’s AI software and accelerated computing platform, Zoho is building a wide range of models to meet the diverse needs of its enterprise customers.

Creation of Large Language Models (LLMs)

Through this collaboration, Zoho will run its LLMs on the NVIDIA accelerated computing platform with NVIDIA Hopper GPUs, using the NVIDIA NeMo platform to develop custom generative AI, including LLMs and multimodal, vision and speech models.
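As a rough illustration of what building on NeMo can look like in practice, the sketch below loads a pretrained speech-recognition model from NeMo’s public model collections using its documented Python API. The model name and audio file are placeholders chosen for the example; they say nothing about Zoho’s internal models or pipeline.

```python
# Sketch: loading a pretrained speech model from the NeMo collections.
# Model name and audio path are illustrative placeholders, not Zoho's assets.
import nemo.collections.asr as nemo_asr

# Download a pretrained English ASR checkpoint from NVIDIA's model catalog.
asr_model = nemo_asr.models.ASRModel.from_pretrained(
    model_name="stt_en_conformer_ctc_small"
)

# Transcribe a local audio file (16 kHz mono WAV is the usual expectation).
transcripts = asr_model.transcribe(["sample_call.wav"])
print(transcripts[0])
```

The same collections pattern extends to NeMo’s language, vision and multimodal models, which is the part of the platform Zoho says it is using for its custom LLMs.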

Additionally, Zoho is testing NVIDIA TensorRT-LLM to optimize its LLMs for deployment, and has already measured a 60% increase in performance and a 35% reduction in latency compared with the open-source framework it used previously. The company is also accelerating other workloads, such as speech-to-text, on NVIDIA’s accelerated computing infrastructure.
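For context on what deploying through TensorRT-LLM involves, here is a minimal sketch using the library’s high-level LLM API as documented in recent releases. The checkpoint, prompt and sampling settings are illustrative assumptions rather than details of Zoho’s deployment, and exact parameter names can vary between TensorRT-LLM versions.

```python
# Sketch: optimized inference through TensorRT-LLM's high-level LLM API.
# The Hugging Face checkpoint and prompt are placeholders, not Zoho's models.
from tensorrt_llm import LLM, SamplingParams

# Compiles (or reuses) a TensorRT engine for the given checkpoint on the local GPU.
llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

# Conservative sampling settings for a business-style summarization prompt.
params = SamplingParams(temperature=0.2, top_p=0.95)

# generate() returns one output object per prompt.
for out in llm.generate(["Summarize this support ticket in one sentence:"], params):
    print(out.outputs[0].text)
```

The engine compilation step is where TensorRT-LLM applies the kernel- and memory-level optimizations that account for the throughput and latency gains Zoho reports.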