Nvidia has marked a new stage in the development of artificial intelligence by launching Chat with RTX, a new tool currently available to owners of GeForce RTX 30 and 40 series graphics cards. The solution lets users run an AI-powered chatbot locally on a Windows PC, giving it access to files and documents stored on the device.

Users can search for information simply by asking questions: the system analyzes the local data available to it and returns an answer drawn from those files. The tool is built on Mistral's open-source AI model but is also compatible with other models, in particular Meta's Llama 2. Running Chat with RTX requires a significant amount of disk space, roughly 50 to 100 GB depending on the models installed. It supports several file formats (PDF, DOC, DOCX, XML) and can even work with transcripts of YouTube videos.
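Conceptually, this follows the retrieval-augmented pattern: passages relevant to the question are pulled from local files and prepended to the prompt that the local model receives. The sketch below illustrates the idea in Python; the folder layout, the naive word-overlap scoring, and the `ask_local_model` stub are illustrative assumptions, not Nvidia's actual implementation.

```python
from pathlib import Path

def load_documents(folder: str) -> dict[str, str]:
    """Read every plain-text file in the folder into memory."""
    docs = {}
    for path in Path(folder).glob("*.txt"):
        docs[path.name] = path.read_text(encoding="utf-8", errors="ignore")
    return docs

def score(chunk: str, question: str) -> int:
    """Very naive relevance score: count words shared with the question."""
    q_words = set(question.lower().split())
    return sum(1 for w in chunk.lower().split() if w in q_words)

def retrieve(docs: dict[str, str], question: str, top_k: int = 3) -> list[str]:
    """Split documents into paragraphs and keep the most relevant ones."""
    chunks = [p for text in docs.values() for p in text.split("\n\n") if p.strip()]
    return sorted(chunks, key=lambda c: score(c, question), reverse=True)[:top_k]

def ask_local_model(prompt: str) -> str:
    """Placeholder for a call to a locally hosted model such as Mistral or Llama 2.
    A real setup would route this to an inference runtime running on the GPU."""
    return f"[model reply based on a prompt of {len(prompt)} characters]"

def answer(docs_folder: str, question: str) -> str:
    docs = load_documents(docs_folder)
    context = "\n\n".join(retrieve(docs, question))
    prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
    return ask_local_model(prompt)
```

Because retrieval and generation both happen on the user's own hardware, the documents never leave the machine, which is the privacy argument for running such a chatbot locally.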

The main limitation of Chat with RTX is that it does not preserve the context of a dialogue. For example, if you ask, "What was the wild goat population in New Zealand in 2010?" and then follow up with "What color is their fur?", the chatbot will not understand that you are still talking about the goats from the previous question. This shortcoming can affect the consistency and depth of a conversation with the system.
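The difference is easy to see when you compare a stateless query loop with one that carries the conversation history forward. The sketch below is a hypothetical illustration (the `ask_local_model` stub again stands in for the local model); judging by the reported behavior, Chat with RTX acts like the first, stateless variant.

```python
def ask_local_model(prompt: str) -> str:
    """Placeholder for the local model call (an assumption for illustration)."""
    return f"[model reply based on a prompt of {len(prompt)} characters]"

# Stateless: each question is sent on its own, so "their" in the
# follow-up question has nothing to resolve against.
def ask_stateless(question: str) -> str:
    return ask_local_model(question)

# Stateful: previous turns are prepended to the prompt, so the model
# can tell that "their fur" refers to the goats mentioned earlier.
history: list[str] = []

def ask_with_history(question: str) -> str:
    history.append(f"User: {question}")
    reply = ask_local_model("\n".join(history) + "\nAssistant:")
    history.append(f"Assistant: {reply}")
    return reply
```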

A report released at the recent World Economic Forum highlighted the prospect of rapid growth in technologies that let generative AI models run locally rather than in the cloud. Experts based this conclusion on the key advantages of on-device models: stronger confidentiality, the absence of network delays, and lower costs compared with cloud-based counterparts.