Extend feedback options in Interactive tester
under review
Daffodil Reptile
Currently, giving a thumbs down on an AI Agent response offers only limited feedback. To improve learning and response quality, we propose adding predefined feedback categories (e.g. factually incorrect, incomplete, or poor style) plus an optional free-text field. This would make it easier for users to clarify what went wrong without having to write entirely new instructions.
Similar to the feedback flow in ChatGPT, this would help identify patterns in user dissatisfaction and give the AI Agent clearer input for improving future answers. It would also save time, since users wouldn't need to update domain knowledge or instructions every time a small tweak is needed.
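To make the idea concrete, here is a minimal sketch of what such a feedback payload could look like. The category names, field names, and example values are illustrative assumptions on our side, not an existing API:

```typescript
// Illustrative sketch only: category and field names are assumptions,
// not part of any existing Interactive tester API.
type FeedbackCategory = "factually_incorrect" | "incomplete" | "poor_style";

interface ResponseFeedback {
  responseId: string;              // which AI Agent response the feedback refers to
  rating: "thumbs_up" | "thumbs_down";
  categories?: FeedbackCategory[]; // predefined reasons, shown after a thumbs down
  comment?: string;                // optional free-text clarification
}

// Example of what a thumbs down with categories might produce
const feedback: ResponseFeedback = {
  responseId: "resp_123",
  rating: "thumbs_down",
  categories: ["factually_incorrect", "incomplete"],
  comment: "The answer cited a pricing plan we no longer offer.",
};

console.log(JSON.stringify(feedback, null, 2));
```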
This post was marked as under review
Fleur Nouwens
Hello Daffodil Reptile! I have a few more questions for you:
- What specific types of feedback options would be most useful for your needs?
- How often do you find yourself needing to provide detailed feedback beyond a simple thumbs up or down?
- Would you prefer predefined feedback categories or the ability to write custom feedback?