This video is part of the appearance, "Selector AI Presents at Networking Field Day 35". It was recorded at Networking Field Day 35 on July 11, 2024, from 13:30 to 15:00.
Watch on YouTube
Watch on Vimeo
An LLM differs from a function in that, given an output, it imputes, or infers, the function and arguments that would produce it. We first consider how this process works within Selector when an English phrase is converted to a query. We then step through the design of Selector's LLM, which starts from a base LLM trained to translate English phrases into SQL and is then fine-tuned, on-premises, with customer-specific entities. In this way, each Selector deployment relies on an LLM tailored to the customer at hand.
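To make the inference step concrete, here is a minimal, hypothetical sketch of the phrase-to-query mapping the video describes. It is not Selector's implementation: a rule-based stand-in plays the role of the fine-tuned LLM, and the entity table, schema, and column names are invented for illustration. The key idea it shows is that the model takes a desired output described in English and infers a structured query (a function plus its arguments), with customer-specific entities learned during fine-tuning.

```python
import re

# Customer-specific entities a fine-tuned model would learn on-premises
# (hypothetical names and schema).
ENTITIES = {"core routers": "device_type = 'router' AND role = 'core'"}

def infer_query(phrase: str) -> str:
    """Infer a SQL query (function + arguments) from an English phrase.

    A real system would use a fine-tuned LLM; this regex stand-in only
    illustrates the shape of the mapping.
    """
    m = re.match(r"show (?P<metric>\w+) for (?P<entity>.+)", phrase.lower())
    if not m:
        raise ValueError(f"unrecognized phrase: {phrase!r}")
    # Resolve the entity via the customer-specific table, falling back
    # to a literal name match.
    where = ENTITIES.get(m.group("entity"), f"name = '{m.group('entity')}'")
    return f"SELECT {m.group('metric')} FROM devices WHERE {where};"

print(infer_query("Show latency for core routers"))
# SELECT latency FROM devices WHERE device_type = 'router' AND role = 'core';
```

The point of the sketch is the direction of inference: the English phrase names the result the user wants, and the system works backward to the query that produces it.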
Personnel: Nitin Kumar