Conversations and Request Executions
Interactions with Wikit Semantics LLM apps take two distinct forms: individual request executions and full conversations. Understanding both concepts is essential to getting the most out of the platform.
Request Executions
A request execution represents a one-time, self-contained interaction with an LLM app. It consists of the following elements:
- The initial request submitted by the user
- The processing performed by the LLM app
- The generated response
- Associated metadata (timestamp, identifiers, etc.)
- Any user feedback
Each execution is independent and forms a complete unit of interaction on its own. This model is particularly well suited to use cases that call for one-off answers or targeted processing.
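As a minimal sketch (the field names and feedback values below are illustrative assumptions, not Wikit Semantics' actual schema), such an execution can be modeled as a single self-contained record:

```typescript
// Illustrative model of a request execution. Field names and the
// feedback values are assumptions for this sketch, not the
// platform's actual schema.
interface RequestExecution {
  id: string;                          // unique identifier of the execution
  appId: string;                       // identifier of the LLM app involved
  timestamp: Date;                     // when the request was submitted
  request: string;                     // the user's initial request
  response: string;                    // the generated response
  feedback?: "positive" | "negative";  // user feedback, if any
}
```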
Conversations
A conversation represents a series of connected interactions between a user and a conversational LLM app. It is characterized by:
- A conversational context maintained throughout the exchange
- A chronological sequence of messages
- A preserved history of the interaction
- The ability to cross-reference earlier messages
- Sustained thematic coherence across turns
Conversations allow for more natural, richer interactions, where each message can build on the context of previous exchanges.
Nota bene: Technically speaking, a conversation is a set of request executions.
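Continuing the illustrative model above, that relationship can be expressed directly: a conversation is little more than an identifier plus the chronological list of its request executions, and the context for each new message is rebuilt from that history.

```typescript
// A conversation groups the request executions of one exchange,
// in chronological order (illustrative, as above).
interface Conversation {
  id: string;                      // conversation identifier
  executions: RequestExecution[];  // messages, oldest first
}

// Rebuilding context from history is what lets each message
// build on the context of previous exchanges.
function buildContext(conversation: Conversation): string {
  return conversation.executions
    .map((e) => `User: ${e.request}\nApp: ${e.response}`)
    .join("\n");
}
```

Treating a conversation as nothing more than its executions keeps the two concepts consistent: every message in a conversation remains an independently traceable request execution.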
Traceability and Analysis
Wikit Semantics systematically retains the history of interactions, whether individual executions or complete conversations. This traceability allows for:
- Analysis of LLM app performance
- Identification of usage patterns
- Continuous improvement of responses
- Monitoring user satisfaction via feedback (see the sketch after this list)
- Export of data for in-depth analysis
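For example, the retained feedback makes user satisfaction directly measurable. A minimal sketch, reusing the illustrative RequestExecution model from above:

```typescript
// Sketch: compute a satisfaction rate from retained feedback.
// Executions without feedback are left out of the denominator.
function satisfactionRate(executions: RequestExecution[]): number | null {
  const rated = executions.filter((e) => e.feedback !== undefined);
  if (rated.length === 0) return null; // no feedback collected yet
  const positive = rated.filter((e) => e.feedback === "positive").length;
  return positive / rated.length;
}
```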
Management and Use
The platform offers several features for making effective use of this data:
- Search and filtering of interactions
- Analysis of user feedback
- Export of data for external analysis (illustrated in the sketch after this list)
- Visualization of usage trends
- Identification of improvement opportunities
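For example, filtering and export can be chained: select the interactions of interest, then serialize them for an external tool. A sketch under the same illustrative model (the filter criteria and JSON output format are assumptions):

```typescript
// Sketch: filter retained interactions by app and date, then
// export them as JSON for external analysis. Criteria and output
// format are assumptions for illustration.
function exportInteractions(
  executions: RequestExecution[],
  opts: { appId?: string; since?: Date } = {}
): string {
  const selected = executions.filter(
    (e) =>
      (opts.appId === undefined || e.appId === opts.appId) &&
      (opts.since === undefined ||
        e.timestamp.getTime() >= opts.since.getTime())
  );
  return JSON.stringify(selected, null, 2);
}
```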
These management and analysis capabilities are essential for optimizing LLM app performance and ensuring a high-quality user experience.