Building Reflexive Agents on Kairon Using DIET Classifier and Large Language Models

By kairon
Updated on Nov 18 2024

Introduction


In the realm of conversational AI, balancing flexibility with control is essential for creating agents that are both responsive and reliable. Kairon offers a range of configurable agent types, from deterministic rule-based bots to complex LLM-driven agents, allowing for a tailored approach based on specific use cases. By incorporating reflexive learning techniques, Kairon enables agents to improve over time, enhancing accuracy and user satisfaction. In this post, we’ll explore the types of agent flows you can build on Kairon, the underlying mechanisms of DIET and LLM integration, and how reflexive agents can benefit from these combined elements.

Glossary

For readers new to these concepts, here’s a quick glossary of terms used in this post:

  1. LLM (Large Language Model): A machine learning model trained on vast datasets to understand and generate human-like text. Examples include GPT-3 and similar AI models capable of handling a wide range of conversational inputs.
  2. DIET Classifier (Dual Intent and Entity Transformer): A model designed to classify user intent and recognize entities within text, directing conversational flows. In Kairon, the DIET classifier acts as the first layer of classification, deciding which queries should be handled by rule-based actions versus LLM-driven actions.
  3. Intent: The underlying goal or purpose of a user’s message. For example, in “What’s the weather today?”, the intent is to get information about the weather.
  4. Fallback: When a bot is unable to identify the user’s intent or provide an appropriate response, it falls back, usually offering a generic response or seeking clarification.
  5. Reflexive Agent: An AI agent that can analyze its past interactions, identify areas for improvement, and adapt over time by adjusting its responses or adding new capabilities based on feedback.
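
To make the intent and fallback entries above concrete, here is a minimal Python sketch of how a confidence threshold separates the two. The threshold value, function name, and prediction format are hypothetical, not Kairon’s actual API:

```python
# Hypothetical sketch: mapping a classifier's prediction to an intent
# or a fallback, based on a confidence threshold.

FALLBACK_THRESHOLD = 0.6  # illustrative cutoff; real bots tune this value


def resolve_intent(prediction: dict) -> str:
    """Return the predicted intent, or 'fallback' when confidence is too low."""
    if prediction["confidence"] >= FALLBACK_THRESHOLD:
        return prediction["intent"]
    return "fallback"


# A confident prediction resolves to its intent; a weak one falls back.
confident = resolve_intent({"intent": "ask_weather", "confidence": 0.92})
uncertain = resolve_intent({"intent": "ask_weather", "confidence": 0.31})
```

In practice the threshold trades coverage for safety: a higher cutoff produces more fallbacks but fewer confidently wrong answers.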

Types of Agent Flows on Kairon

Kairon supports four primary types of agent flows, each with distinct capabilities and suitable use cases:

  1. Rule-Based Bots
     • Characteristics: Rule bots follow fixed input-output patterns, which makes them deterministic and highly predictable. They don’t require training data because they are designed to respond to specific, predefined phrases or keywords.
     • Ideal For: Simple, repetitive tasks where responses don’t vary, such as FAQs, greetings, or basic information retrieval.
     • Benefits: Easy to set up and use, rule bots provide a streamlined user experience with little risk of error.
  2. Classifier Bots
     • Characteristics: These bots rely on the DIET classifier to group various inputs into specific categories or “buckets.” With medium predictability, classifier bots can handle a range of intents, but they need training examples for each category to ensure accuracy.
     • Ideal For: Scenarios where a moderate level of flexibility is needed, such as customer support for varied but predictable requests.
     • Benefits: A simple user flow that allows for some adaptability while maintaining a clear, rule-based structure for managing interactions.
  3. LLM Agents
     • Characteristics: LLM agents are built on large language models and can handle a wide range of inputs and outputs. They’re less predictable than rule or classifier bots, as they generate responses in natural language rather than follow strict rules.
     • Ideal For: Complex interactions requiring dynamic responses, such as open-ended inquiries or nuanced conversations that can’t be addressed with simple rules or categories.
     • Benefits: High flexibility and adaptability, allowing for responses that feel conversational and nuanced, though these agents require extensive training and fine-tuning.
  4. Hybrid Agents
     • Characteristics: Hybrid agents combine elements of rule-based flows with LLM-powered flexibility. They accept a broad range of inputs but provide deterministic outputs, enhancing predictability and reducing error rates.
     • Ideal For: Use cases needing a balance of predictability and adaptability, such as systems that handle complex inquiries but where certain questions or responses must follow a consistent structure.
     • Benefits: By combining flexibility with controlled responses, hybrid agents offer a balanced user experience that maintains both depth and predictability.
How DIET and LLM Integration Works

  • Classifying and Tagging Intents: The DIET classifier acts as a “gatekeeper,” tagging each user input with an intent and routing it to the appropriate action type (LLM or rule-based) depending on the complexity and nature of the query.
  • Hallucination Prevention: By routing queries that don’t require an LLM through rule-based or classifier flows, Kairon reduces the chance of hallucination, ensuring a higher standard of relevance and accuracy in responses.

The Reflexive Feedback Loop

  • Monitoring Fallbacks: When the DIET classifier encounters a message it can’t categorize confidently, it logs the interaction as a fallback. By tracking fallbacks, Kairon gains valuable data on areas where the classifier or the overall agent setup may need refinement.
  • LLM-Assisted Feedback Loop: Periodically, an LLM reviews past chat interactions, analyzing fallbacks and misclassifications. This analysis identifies whether specific intents need more training data or whether new stories should be created to cover common fallback scenarios.
  • Continuous Learning and Adaptation: Based on the LLM’s analysis, Kairon can automatically add training examples or generate new stories, which human agents review for quality before they go live. This reflexive approach ensures that the DIET classifier evolves continuously, improving its accuracy and reducing fallback occurrences over time.
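
The gatekeeper pattern described above can be sketched as a simple router. All intent names, responses, and mappings here are hypothetical; the sketch only illustrates the idea of sending some intents to fixed rule responses and reserving the LLM for queries that warrant it:

```python
# Hypothetical router sketch: a DIET-style classifier has already tagged the
# input with an intent; configuration decides whether that intent gets a
# deterministic rule response or an LLM-generated one.

RULE_RESPONSES = {
    "greet": "Hello! How can I help you today?",
    "faq_hours": "We are open 9am-5pm, Monday to Friday.",
}
LLM_INTENTS = {"open_question"}  # intents complex enough to warrant generation


def call_llm(text: str) -> str:
    """Stand-in for a real LLM call."""
    return f"[LLM-generated answer to: {text}]"


def route(intent: str, text: str) -> str:
    if intent in RULE_RESPONSES:   # deterministic, rule-based output
        return RULE_RESPONSES[intent]
    if intent in LLM_INTENTS:      # flexible, LLM-generated output
        return call_llm(text)
    # Anything else is a fallback: ask the user to rephrase.
    return "Sorry, I didn't catch that. Could you rephrase?"
```

Because only intents explicitly listed in `LLM_INTENTS` ever reach the model, queries with known, fixed answers can never be hallucinated.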

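The monitoring and learning steps described above can be sketched as a small feedback loop. The class and method names are invented for illustration; a real deployment would call an actual LLM during the review pass and persist approved examples into the training set:

```python
# Illustrative sketch of the reflexive loop: collect fallback logs, have an
# LLM propose candidate training examples, and hold them for human review
# before anything goes live.

from dataclasses import dataclass, field


@dataclass
class ReflexiveLoop:
    fallback_log: list = field(default_factory=list)
    pending_review: list = field(default_factory=list)

    def record_fallback(self, user_message: str) -> None:
        """Log a message the classifier could not categorize confidently."""
        self.fallback_log.append(user_message)

    def llm_review(self) -> None:
        """Stand-in for the periodic LLM pass over fallback logs."""
        for message in self.fallback_log:
            # A real system would ask an LLM to suggest an intent label and
            # extra training utterances; here we just queue the message.
            self.pending_review.append(
                {"text": message, "suggested_intent": "unknown"}
            )
        self.fallback_log.clear()

    def approve(self, item: dict, intent: str) -> dict:
        """Human reviewer assigns the final intent before the example goes live."""
        item["suggested_intent"] = intent
        self.pending_review.remove(item)
        return item
```

The key design point is the human-in-the-loop gate: nothing the LLM proposes reaches the classifier’s training data until a reviewer approves it.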
Why This Reflexive Schema Matters

This architecture exemplifies reflexivity by enabling the agent to learn from past interactions, identify weak spots, and adapt iteratively. Rather than being static, the system dynamically enhances its responses and increases its coverage, adapting to emerging patterns in user queries. The benefits of this schema include:

  • Increased Accuracy: By adding data based on real-world interactions, the DIET classifier’s accuracy improves with time, creating a more reliable conversational flow.
  • Controlled Flexibility: Through targeted LLM use and fallback handling, the agent maintains a balance of flexibility and predictability.
  • Scalability: As user needs evolve, the reflexive framework enables agents to adjust seamlessly without requiring complete retraining, allowing them to handle a broader set of queries effectively.

Conclusion

Kairon’s reflexive agent framework, combining DIET classifiers with selective LLM integration, offers a robust solution for building adaptable and reliable conversational agents. With the ability to dynamically enhance accuracy and prevent hallucinated responses, this setup makes Kairon a powerful tool for conversational AI developers seeking to maximize relevance and quality in their chatbot interactions. By incorporating reflexive learning, Kairon ensures that agents evolve based on real-world use, meeting user needs with increasing sophistication and effectiveness.