diff --git a/CLAUDE.md b/CLAUDE.md new file mode 100644 index 00000000..32818ff7 --- /dev/null +++ b/CLAUDE.md @@ -0,0 +1,41 @@ +# Semantic Workbench Developer Guidelines + +## Common Commands +* Build/Install: `make install` (recursive for all subdirectories) +* Format: `make format` (runs ruff formatter) +* Lint: `make lint` (runs ruff linter) +* Type-check: `make type-check` (runs pyright) +* Test: `make test` (runs pytest) +* Single test: `uv run pytest tests/test_file.py::test_function -v` +* Frontend: `cd workbench-app && pnpm dev` (starts dev server) +* Workbench service: `cd workbench-service && python -m semantic_workbench_service.start` + +## Code Style +### Python +* Indentation: 4 spaces +* Line length: 120 characters +* Imports: stdlib → third-party → local, alphabetized within groups +* Naming: `snake_case` for functions/variables, `CamelCase` for classes, `UPPER_SNAKE_CASE` for constants +* Types: Use type annotations consistently; prefer Union syntax (`str | None`) for Python 3.10+ +* Documentation: Triple-quote docstrings with param/return descriptions + +### C# (.NET) +* Naming: `PascalCase` for classes/methods/properties, `camelCase` for parameters/local variables, `_camelCase` for private fields +* Error handling: Use try/catch with specific exceptions, `ConfigureAwait(false)` with async +* Documentation: XML comments for public APIs +* Async: Use async/await consistently with cancellation tokens + +### TypeScript/React (Frontend) +* Component files: Use PascalCase for component names and files (e.g., `MessageHeader.tsx`) +* Hooks: Prefix with 'use' (e.g., `useConversationEvents.ts`) +* CSS: Use Fluent UI styling with mergeStyle and useClasses pattern +* State management: Redux with Redux Toolkit and RTK Query +* Models: Define strong TypeScript interfaces/types + +## Tools +* Python: Uses uv for environment/dependency management +* Linting/Formatting: Ruff (Python), ESLint (TypeScript) +* Type checking: Pyright (Python), TypeScript compiler +* Testing: pytest (Python), React Testing Library (Frontend) +* Frontend: React, Fluent UI v9, Fluent Copilot components +* Package management: uv (Python), pnpm (Frontend) \ No newline at end of file diff --git a/docs/ASSISTANT_DEVELOPMENT_GUIDE.md b/docs/ASSISTANT_DEVELOPMENT_GUIDE.md index b223904e..abb042d4 100644 --- a/docs/ASSISTANT_DEVELOPMENT_GUIDE.md +++ b/docs/ASSISTANT_DEVELOPMENT_GUIDE.md @@ -2,59 +2,332 @@ ## Overview -For assistants to be instantiated in Semantic Workbench, you need to implement an assistant service that the workbench can talk with via http. +For assistants to be instantiated in Semantic Workbench, you need to implement an assistant service that the workbench can talk with via HTTP. Assistants in Semantic Workbench follow a modern architecture with support for extensions, event-driven interactions, and flexible configuration. 
-We provide several python base classes to make this easier: [semantic-workbench-assistant](../libraries/python/semantic-workbench-assistant/README.md) +We provide several Python base classes to make this easier: [semantic-workbench-assistant](../libraries/python/semantic-workbench-assistant/README.md) Example assistant services: -- [Canonical assistant example](../libraries/python/semantic-workbench-assistant/semantic_workbench_assistant/canonical.py) -- Python assistant [python-01-echo-bot](../examples/python/python-01-echo-bot/README.md) and [python-02-simple-chatbot](../examples/python/python-02-simple-chatbot/README.md) -- .NET agent [dotnet-01-echo-bot](../examples/dotnet/dotnet-01-echo-bot/README.md), [dotnet-02-message-types-demo](../examples/dotnet/dotnet-02-message-types-demo/README.md) and [dotnet-03-simple-chatbot](../examples/dotnet/dotnet-03-simple-chatbot/README.md) +- [Canonical assistant example](../libraries/python/semantic-workbench-assistant/semantic_workbench_assistant/canonical.py) - A simple starting point +- Python assistants: [python-01-echo-bot](../examples/python/python-01-echo-bot/README.md) and [python-02-simple-chatbot](../examples/python/python-02-simple-chatbot/README.md) +- .NET agents: [dotnet-01-echo-bot](../examples/dotnet/dotnet-01-echo-bot/README.md), [dotnet-02-message-types-demo](../examples/dotnet/dotnet-02-message-types-demo/README.md) and [dotnet-03-simple-chatbot](../examples/dotnet/dotnet-03-simple-chatbot/README.md) +- Advanced assistants: [explorer-assistant](../assistants/explorer-assistant) and [codespace-assistant](../assistants/codespace-assistant) ## Top level concepts -RECOMMENDED FOR PYTHON: Use the `semantic-workbench-assistant` classes to create your assistant service. These classes provide a lot of the boilerplate code for you. +### AssistantApp -See the [semantic-workbench-assistant.assistant_app.AssistantApp](../libraries/python/semantic-workbench-assistant/semantic_workbench_assistant/assistant_app/assistant.py) for the classes -and the Python [python-01-echo-bot](../examples/python/python-01-echo-bot/README.md) for an example of how to use them. +RECOMMENDED FOR PYTHON: Use the `AssistantApp` class from the `semantic-workbench-assistant` package to create your assistant service. This class provides a robust framework for building assistants with event handling, configuration management, and extension support. -### assistant_service.FastAPIAssistantService +```python +from semantic_workbench_assistant.assistant_app import AssistantApp -Your main job is to implement a service that supports all the Semantic Workbench methods. The [Canonical assistant example](../libraries/python/semantic-workbench-assistant/semantic_workbench_assistant/canonical.py) demonstrates a minimal implementation. +# Create the AssistantApp instance +assistant = AssistantApp( + assistant_service_id="your-assistant-id", + assistant_service_name="Your Assistant Name", + assistant_service_description="Description of your assistant", + config_provider=your_config_provider, + content_interceptor=content_safety, # Optional content safety +) -It implements an assistant service that inherits from FastAPIAssistantService: +# Create a FastAPI app from the assistant +app = assistant.fastapi_app() +``` -`class CanonicalAssistant(assistant_service.FastAPIAssistantService):` +This approach provides a complete assistant service with event handlers, configuration management, and extension support. 
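+The FastAPI app returned by `assistant.fastapi_app()` can be served like any other ASGI application. A minimal sketch, assuming `uvicorn` is installed and the `app` object above lives in a module named `chat.py` (the module name, host, and port here are illustrative):
+
+```python
+# Serve the assistant service over HTTP so the workbench can reach it.
+import uvicorn
+
+from chat import app  # the FastAPI app created via assistant.fastapi_app()
+
+if __name__ == "__main__":
+    uvicorn.run(app, host="127.0.0.1", port=3000)
+```
+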
-This service is now a FastAPIAssistantService containing all the assistant methods that can be overridden as desired. +### Configuration with Pydantic Models -## Assistant service development: general steps +Assistant configurations are defined using Pydantic models with UI annotations for rendering in the workbench interface: -- Create a fork of this repository by clicking the "Fork" button at the top right of the repository page -- Set up your dev environment - - SUGGESTED: Use GitHub Codespaces for a quick, easy, and consistent dev - environment: [/.devcontainer/README.md](../.devcontainer/README.md) - - ALTERNATIVE: Local setup following the [main README](../README.md#local-development-environment) -- Create a directory for your projects. If you create this in the repo root, any assistant example projects will already have the correct relative paths set up to access the `semantic-workbench-*` packages or libraries. -- Create a project for your new assistant in your projects directory, e.g. `//` -- Getting started with the assistant service - - Copy skeleton of an existing project: e.g. one of the projects from the [examples](../examples) directory - - Alternatively, consider using the canonical assistant as a starting point if you want to implement a new custom base - - Follow the `README.md` in the example project to get started - - If the project has a `.env.example` file, copy it to `.env` and update the values as needed -- Build and Launch assistant. Run workbench service. Run workbench app. Add assistant local url to workbench via UI. -- NOTE: See additional documentation in [/workbench-app/docs](../workbench-app/docs/) regarding app features that can be used in the assistant service. +```python +from pydantic import BaseModel, Field +from typing import Annotated -## Assistant service deployment +class AssistantConfigModel(BaseModel): + welcome_message: Annotated[ + str, + Field( + title="Welcome Message", + description="Message sent when the assistant joins a conversation" + ) + ] = "Hello! I'm your assistant. How can I help you today?" + + only_respond_to_mentions: Annotated[ + bool, + Field( + title="Only respond to @mentions", + description="Only respond when explicitly mentioned" + ) + ] = False +``` + +### Event Handling System + +Assistants use a decorator-based event system to respond to conversation events: + +```python +@assistant.events.conversation.message.chat.on_created +async def on_message_created( + context: ConversationContext, event: ConversationEvent, message: ConversationMessage +) -> None: + # Handle new chat messages here + await context.send_messages( + NewConversationMessage( + content="I received your message!", + message_type=MessageType.chat + ) + ) + +@assistant.events.conversation.on_created +async def on_conversation_created(context: ConversationContext) -> None: + # Send welcome message when assistant joins a conversation + await context.send_messages( + NewConversationMessage( + content="Hello! 
I'm your assistant.", + message_type=MessageType.chat + ) + ) +``` + +## Extensions + +Semantic Workbench assistants can leverage various extensions to enhance functionality: + +### Artifacts Extension + +Create and manage rich content artifacts within conversations: + +```python +from assistant_extensions.artifacts import ArtifactsExtension, ArtifactsConfigModel + +artifacts_extension = ArtifactsExtension( + assistant=assistant, + config_provider=artifacts_config_provider +) +``` + +### Attachments Extension + +Process files uploaded during conversations: + +```python +from assistant_extensions.attachments import AttachmentsExtension + +attachments_extension = AttachmentsExtension(assistant=assistant) + +# Use in message handler +messages = await attachments_extension.get_completion_messages_for_attachments(context, config) +``` + +### Workflows Extension + +Define and execute multi-step automated workflows: + +```python +from assistant_extensions.workflows import WorkflowsExtension, WorkflowsConfigModel + +workflows_extension = WorkflowsExtension( + assistant=assistant, + content_safety_metadata_key="content_safety", + config_provider=workflows_config_provider +) +``` + +### MCP Tools Extension + +Connect to Model Context Protocol (MCP) servers for extended functionality: + +```python +from assistant_extensions.mcp import establish_mcp_sessions, retrieve_mcp_tools_from_sessions +from contextlib import AsyncExitStack + +async with AsyncExitStack() as stack: + mcp_sessions = await establish_mcp_sessions(config, stack) + tools = retrieve_mcp_tools_from_sessions(sessions, config) +``` + +## Content Safety + +Assistants can implement content safety evaluators to ensure safe interactions: + +```python +from content_safety.evaluators import CombinedContentSafetyEvaluator +from semantic_workbench_assistant.assistant_app import ContentSafety + +async def content_evaluator_factory(context: ConversationContext) -> ContentSafetyEvaluator: + config = await assistant_config.get(context.assistant) + return CombinedContentSafetyEvaluator(config.content_safety_config) + +content_safety = ContentSafety(content_evaluator_factory) +``` + +## LLM Integration + +Assistants can support multiple LLM providers, including OpenAI and Anthropic: + +```python +# OpenAI example +from openai_client import create_client, OpenAIServiceConfig, OpenAIRequestConfig + +client = create_client( + service_config=OpenAIServiceConfig(api_key=api_key), + request_config=OpenAIRequestConfig(model="gpt-4o") +) + +# Anthropic example +from anthropic_client import create_client, AnthropicServiceConfig, AnthropicRequestConfig + +client = create_client( + service_config=AnthropicServiceConfig(api_key=api_key), + request_config=AnthropicRequestConfig(model="claude-3-opus-20240229") +) +``` + +## Common Patterns + +### Message Response Logic + +Implement filtering for messages the assistant should respond to: + +```python +async def should_respond_to_message(context: ConversationContext, message: ConversationMessage) -> bool: + # Ignore messages directed at other participants + if message.metadata.get("directed_at") and message.metadata["directed_at"] != context.assistant.id: + return False + + # Only respond to mentions if configured + if config.only_respond_to_mentions and f"@{context.assistant.name}" not in message.content: + # Notify user if needed + return False + + return True +``` + +### Status Management + +Update the assistant's status during processing: + +```python +async with context.set_status("thinking..."): + # Process the 
message + await generate_response(context) +``` + +### Error Handling + +Implement robust error handling with debug metadata: + +```python +try: + await respond_to_conversation(context, config) +except Exception as e: + logger.exception(f"Exception occurred responding to conversation: {e}") + deepmerge.always_merger.merge(metadata, {"debug": {"error": str(e)}}) + await context.send_messages( + NewConversationMessage( + content="An error occurred. View debug inspector for more information.", + message_type=MessageType.notice, + metadata=metadata + ) + ) +``` + +## Frontend Integration + +Assistants can integrate with the Semantic Workbench frontend app to provide a rich user experience. The frontend is built using React/TypeScript with Fluent UI components. + +### Message Types + +The workbench app supports several message types that assistants can use for different purposes: + +- **Chat**: Standard conversation messages (primary communication) +- **Notice**: System-like messages that display as a single line +- **Note**: Messages that provide additional information outside the conversation flow +- **Log**: Messages that don't appear in the UI but are available to assistants +- **Command/Command Response**: Special messages prefixed with `/` to invoke commands + +```python +await context.send_messages( + NewConversationMessage( + content="This is a system notice", + message_type=MessageType.notice, + metadata={"attribution": "System"} + ) +) +``` + +### Message Metadata + +Messages can include metadata to enhance their display and behavior: + +- **attribution**: Source information displayed after the sender name +- **debug**: Debugging information displayed in a popup +- **footer_items**: Additional information displayed at the bottom of messages +- **directed_at**: Target participant for commands +- **href**: Links for navigation within the app + +```python +metadata = { + "debug": {"tokens_used": 520, "model": "gpt-4o"}, + "footer_items": ["520 tokens used", "Response time: 1.2s"] +} +``` + +### Frontend Components + +The workbench app provides components that assistants can leverage: + +- **Content Renderers**: Support for markdown, code, mermaid diagrams, ABC notation +- **Conversation Canvas**: Interactive workspace for conversations +- **Debug Inspector**: Visualizer for message metadata and debugging +- **File Attachments**: Support for attached files and documents + +For full documentation on frontend integration, see [/workbench-app/docs](../workbench-app/docs/). + +## Assistant Service Development: Getting Started + +### Project Structure + +A typical assistant project structure: + +``` +your-assistant/ +├── Makefile +├── README.md +├── assistant/ +│ ├── __init__.py +│ ├── chat.py # Main assistant implementation +│ ├── config.py # Configuration models +│ ├── response/ # Response generation logic +│ │ ├── __init__.py +│ │ └── response.py +│ └── text_includes/ # Prompt templates and other text resources +│ └── guardrails_prompt.txt +├── pyproject.toml # Project dependencies +└── uv.lock # Lock file for dependencies +``` + +### Development Steps + +1. Create a fork of this repository +2. Set up your dev environment: + - SUGGESTED: Use GitHub Codespaces for a quick, consistent dev environment: [/.devcontainer/README.md](../.devcontainer/README.md) + - ALTERNATIVE: Local setup following the [main README](../README.md#local-development-environment) +3. Create a directory for your assistant in the appropriate location +4. 
Start building your assistant: + - Copy and modify an existing assistant (explorer-assistant or codespace-assistant for advanced features) + - Configure your environment variables (.env file) +5. Build and launch your assistant, then connect it to the workbench via the UI + +## Assistant Service Deployment DISCLAIMER: The security considerations of hosting a service with a public endpoint are beyond the scope of this document. Please ensure you understand the implications of hosting a service before doing so. It is not recommended to host a publicly available instance of the Semantic Workbench app. If you want to deploy your assistant service to a public endpoint, you will need to create your own Azure app registration and update the app and service files with the new app registration details. See the [Custom app registration](../docs/CUSTOM_APP_REGISTRATION.md) guide for more information. -### Deployment steps - -_TODO: Add more detailed steps, this is a high-level overview_ +### Deployment Steps - Create a new Azure app registration - Update your app and service files with the new app registration details diff --git a/libraries/python/assistant-extensions/README.md b/libraries/python/assistant-extensions/README.md index e69de29b..2634be6d 100644 --- a/libraries/python/assistant-extensions/README.md +++ b/libraries/python/assistant-extensions/README.md @@ -0,0 +1,161 @@ +# Assistant Extensions + +Extensions that enhance Semantic Workbench assistants with additional capabilities beyond the core functionality. + +## Overview + +The `assistant-extensions` library provides several modules that can be integrated with your Semantic Workbench assistants: + +- **Artifacts**: Create and manage file artifacts during conversations (markdown, code, mermaid diagrams, etc.) +- **Attachments**: Process and extract content from file attachments added to conversations +- **AI Clients**: Configure and manage different AI service providers (OpenAI, Azure OpenAI, Anthropic) +- **MCP (Model Context Protocol)**: Connect to and utilize MCP tool servers for extended functionality +- **Workflows**: Define and execute multi-step automated workflows + +These extensions are designed to work with the `semantic-workbench-assistant` framework and can be added to your assistant implementation to enhance its capabilities. + +## Module Details + +### Artifacts + +The Artifacts extension enables assistants to create and manage rich content artifacts within conversations. + +```python +from assistant_extensions.artifacts import ArtifactsExtension, ArtifactsConfigModel +from semantic_workbench_assistant import AssistantApp + +async def get_artifacts_config(context): + return ArtifactsConfigModel(enabled=True) + +# Create and add the extension to your assistant +assistant = AssistantApp(name="My Assistant") +artifacts_extension = ArtifactsExtension( + assistant=assistant, + config_provider=get_artifacts_config +) + +# The extension is now ready to create and manage artifacts +``` + +Supports content types including Markdown, code (with syntax highlighting), Mermaid diagrams, ABC notation for music, and more. + +### Attachments + +Process files uploaded during conversations, extracting and providing content to the AI model. 
+ +```python +from assistant_extensions.attachments import AttachmentsExtension, AttachmentsConfigModel +from semantic_workbench_assistant import AssistantApp + +assistant = AssistantApp(name="My Assistant") +attachments_extension = AttachmentsExtension(assistant=assistant) + +@assistant.events.conversation.message.chat.on_created +async def handle_message(context, event, message): + config = AttachmentsConfigModel( + context_description="Files attached to this conversation" + ) + # Get attachment content to include in AI prompt + messages = await attachments_extension.get_completion_messages_for_attachments(context, config) + # Use messages in your AI completion request +``` + +Supports text files, PDFs, Word documents, and images with OCR capabilities. + +### AI Clients + +Configuration models for different AI service providers to simplify client setup. + +```python +from assistant_extensions.ai_clients.config import OpenAIClientConfigModel, AIServiceType +from openai_client import OpenAIServiceConfig, OpenAIRequestConfig + +# Configure an OpenAI client +config = OpenAIClientConfigModel( + ai_service_type=AIServiceType.OpenAI, + service_config=OpenAIServiceConfig( + api_key=os.environ.get("OPENAI_API_KEY") + ), + request_config=OpenAIRequestConfig( + model="gpt-4o", + temperature=0.7 + ) +) + +# Use this config with openai_client or anthropic_client libraries +``` + +### MCP (Model Context Protocol) + +Connect to and utilize MCP tool servers to extend your assistant with external capabilities. + +```python +from assistant_extensions.mcp import establish_mcp_sessions, retrieve_mcp_tools_from_sessions +from contextlib import AsyncExitStack + +async def setup_mcp_tools(config): + async with AsyncExitStack() as stack: + # Connect to MCP servers and get available tools + sessions = await establish_mcp_sessions(config, stack) + tools = retrieve_mcp_tools_from_sessions(sessions, config) + + # Use tools with your AI model + return sessions, tools +``` + +### Workflows + +Define and execute multi-step workflows within conversations, such as automated sequences. + +```python +from assistant_extensions.workflows import WorkflowsExtension, WorkflowsConfigModel +from semantic_workbench_assistant import AssistantApp + +async def get_workflows_config(context): + return WorkflowsConfigModel( + enabled=True, + workflow_definitions=[ + { + "workflow_type": "user_proxy", + "command": "analyze_document", + "name": "Document Analysis", + "description": "Analyze a document for quality and completeness", + "user_messages": [ + {"message": "Please analyze this document for accuracy"}, + {"message": "What improvements would you suggest?"} + ] + } + ] + ) + +assistant = AssistantApp(name="My Assistant") +workflows_extension = WorkflowsExtension( + assistant=assistant, + config_provider=get_workflows_config +) +``` + +## Integration + +These extensions are designed to enhance Semantic Workbench assistants. To use them: + +1. Configure your assistant using the `semantic-workbench-assistant` framework +2. Add the desired extensions to your assistant +3. Implement event handlers for extension functionality +4. Configure extension behavior through their respective config models + +For detailed examples, see the [Assistant Development Guide](../../docs/ASSISTANT_DEVELOPMENT_GUIDE.md) and explore the existing assistant implementations in the repository. 
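+As a rough sketch of how these pieces fit together (it follows the snippets above; the constructor arguments and config providers shown are illustrative, not a verified API):
+
+```python
+from assistant_extensions.artifacts import ArtifactsExtension, ArtifactsConfigModel
+from assistant_extensions.attachments import AttachmentsExtension, AttachmentsConfigModel
+from semantic_workbench_assistant import AssistantApp
+
+
+async def get_artifacts_config(context):
+    return ArtifactsConfigModel(enabled=True)
+
+
+# Create the assistant and register the extensions against it
+assistant = AssistantApp(name="My Assistant")
+artifacts_extension = ArtifactsExtension(assistant=assistant, config_provider=get_artifacts_config)
+attachments_extension = AttachmentsExtension(assistant=assistant)
+
+
+@assistant.events.conversation.message.chat.on_created
+async def handle_message(context, event, message):
+    config = AttachmentsConfigModel(context_description="Files attached to this conversation")
+    attachment_messages = await attachments_extension.get_completion_messages_for_attachments(context, config)
+    # Pass attachment_messages to your AI client along with the rest of the conversation history.
+```
+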
+ +## Optional Dependencies + +Some extensions require additional packages: + +``` +# For attachments support (PDF, Word docs) +pip install "assistant-extensions[attachments]" + +# For MCP tool support +pip install "assistant-extensions[mcp]" +``` + +This library is part of the Semantic Workbench project, which provides a complete framework for building and deploying intelligent assistants. \ No newline at end of file diff --git a/libraries/python/content-safety/README.md b/libraries/python/content-safety/README.md index b03b4e23..720752ea 100644 --- a/libraries/python/content-safety/README.md +++ b/libraries/python/content-safety/README.md @@ -1,9 +1,110 @@ -# Content Safety Evaluators for Semantic Workbench Assistants +# Content Safety for Semantic Workbench -Use these evaluators to ensure that the content being processed by your assistant and being displayed to users is safe and appropriate. This is especially important when dealing with user-generated or model-generated content, but can also be useful for code-generated content as well. +This library provides content safety evaluators to screen and filter potentially harmful content in Semantic Workbench assistants. It helps ensure that user-generated, model-generated, and assistant-generated content is appropriate and safe. -See the [Responsible AI FAQ](../../../RESPONSIBLE_AI_FAQ.md) for more information. +## Key Features -## Recommended +- **Multiple Providers**: Support for both Azure Content Safety and OpenAI Moderations API +- **Unified Interface**: Common API regardless of the underlying provider +- **Configuration UI**: Integration with Semantic Workbench's configuration system +- **Flexible Integration**: Easy to integrate with any assistant implementation -The recommended evaluator is the [`combined-content-safety-evaluator`](./content_safety/README.md) which is a complete package that includes all of the evaluators in this repository. Alternatively, you can use the individual evaluators if you only need one or two of them. See the README files in each evaluator's directory for more information. 
+## Available Evaluators + +### Combined Content Safety Evaluator + +The `CombinedContentSafetyEvaluator` provides a unified interface for using various content safety services: + +```python +from content_safety.evaluators import CombinedContentSafetyEvaluator, CombinedContentSafetyEvaluatorConfig +from content_safety.evaluators.azure_content_safety.config import AzureContentSafetyEvaluatorConfig + +# Configure with Azure Content Safety +config = CombinedContentSafetyEvaluatorConfig( + service_config=AzureContentSafetyEvaluatorConfig( + endpoint="https://your-resource.cognitiveservices.azure.com/", + api_key="your-api-key", + threshold=0.5, # Flag content with harm probability above 50% + ) +) + +# Create evaluator +evaluator = CombinedContentSafetyEvaluator(config) + +# Evaluate content +result = await evaluator.evaluate("Some content to evaluate") +``` + +### Azure Content Safety Evaluator + +Evaluates content using Azure's Content Safety service: + +```python +from content_safety.evaluators.azure_content_safety import AzureContentSafetyEvaluator, AzureContentSafetyEvaluatorConfig + +config = AzureContentSafetyEvaluatorConfig( + endpoint="https://your-resource.cognitiveservices.azure.com/", + api_key="your-api-key", + threshold=0.5 +) + +evaluator = AzureContentSafetyEvaluator(config) +result = await evaluator.evaluate("Content to check") +``` + +### OpenAI Moderations Evaluator + +Evaluates content using OpenAI's Moderations API: + +```python +from content_safety.evaluators.openai_moderations import OpenAIContentSafetyEvaluator, OpenAIContentSafetyEvaluatorConfig + +config = OpenAIContentSafetyEvaluatorConfig( + api_key="your-openai-api-key", + threshold=0.8, # Higher threshold (80%) + max_item_size=4000 # Automatic chunking for longer content +) + +evaluator = OpenAIContentSafetyEvaluator(config) +result = await evaluator.evaluate("Content to check") +``` + +## Integration with Assistants + +To integrate with a Semantic Workbench assistant: + +```python +from content_safety.evaluators import CombinedContentSafetyEvaluator +from semantic_workbench_assistant.assistant_app import ContentSafety + +# Define evaluator factory +async def content_evaluator_factory(context): + config = await assistant_config.get(context.assistant) + return CombinedContentSafetyEvaluator(config.content_safety_config) + +# Create content safety component +content_safety = ContentSafety(content_evaluator_factory) + +# Add to assistant +assistant = AssistantApp( + assistant_service_id="your-assistant", + assistant_service_name="Your Assistant", + content_interceptor=content_safety +) +``` + +## Configuration UI + +The library includes Pydantic models with UI annotations for easy integration with Semantic Workbench's configuration interface. These models generate appropriate form controls in the assistant configuration UI. + +## Evaluation Results + +Evaluation results include: +- Whether content was flagged as unsafe +- Detailed categorization (violence, sexual, hate speech, etc.) +- Confidence scores for different harm categories +- Original response from the provider for debugging + +## Learn More + +See the [Responsible AI FAQ](../../../RESPONSIBLE_AI_FAQ.md) for more information about content safety in the Semantic Workbench ecosystem. 
\ No newline at end of file diff --git a/libraries/python/content-safety/content_safety/README.md b/libraries/python/content-safety/content_safety/README.md index 7b233c44..e864b2f1 100644 --- a/libraries/python/content-safety/content_safety/README.md +++ b/libraries/python/content-safety/content_safety/README.md @@ -1,3 +1,25 @@ -Create separate folders for each class of content safety modules. +# Content Safety Module Internal Structure -- `content_safety/evaluators` for content safety evaluators +This directory contains the implementation of content safety evaluators for the Semantic Workbench. + +## Directory Structure + +- `evaluators/` - Base evaluator interfaces and implementations + - `azure_content_safety/` - Azure Content Safety API implementation + - `openai_moderations/` - OpenAI Moderations API implementation + +## Implementation Details + +The module is designed with a plugin architecture to support multiple content safety providers: + +1. Each provider has its own subdirectory with: + - `evaluator.py` - Implementation of the ContentSafetyEvaluator interface + - `config.py` - Pydantic configuration model with UI annotations + - `__init__.py` - Exports for the module + +2. The `CombinedContentSafetyEvaluator` serves as a factory that: + - Takes a configuration that specifies which provider to use + - Instantiates the appropriate evaluator based on the configuration + - Delegates evaluation requests to the selected provider + +This architecture makes it easy to add new providers while maintaining a consistent API. diff --git a/workbench-app/README.md b/workbench-app/README.md index 09fd5276..9055c123 100644 --- a/workbench-app/README.md +++ b/workbench-app/README.md @@ -8,67 +8,122 @@ The Semantic Workbench app is designed as a client-side, single-page application **React/TypeScript**: The app is built using the React library and TypeScript for static typing. -**Client-Server Interaction**: The app communicates with the Workbench backend service via RESTful APIs, performing AJAX requests to handle user input and display responses. +**Vite**: Build system and development server that provides fast HMR (Hot Module Replacement). -**Single Page Application (SPA)**: Ensures smooth and seamless transitions between different parts of the app without requiring full page reloads. +**Fluent UI & Fluent Copilot**: Microsoft design system components that provide a consistent look and feel. + +**Redux & Redux Toolkit**: Centralized state management with middleware for side effects. + +**React Router**: Handles navigation and URL management within the SPA. + +**Client-Server Interaction**: The app communicates with the Workbench backend service via RESTful APIs and SSE (Server-Sent Events) for real-time updates. + +**Content Rendering**: Support for rich content types including Markdown, code syntax highlighting, Mermaid diagrams, ABC notation for music, and more. ### Initialization and Authentication -The application requires user authentication, typically via Microsoft AAD or MSA accounts. Instructions for setting up the development environment are in the repo readme. To deploy in your own environment see [app registration documentation](../docs/CUSTOM_APP_REGISTRATION.md). +The application requires user authentication, typically via Microsoft AAD or MSA accounts. The app uses Microsoft Authentication Library (MSAL) for authentication flows. Instructions for setting up the development environment are in the repo readme. 
To deploy in your own environment see [app registration documentation](../docs/CUSTOM_APP_REGISTRATION.md). ### Connecting to the Backend Service -The Semantic Workbench app connects to the backend service using RESTful API calls. Here’s how the interaction works: +The Semantic Workbench app connects to the backend service using RESTful API calls. Here's how the interaction works: -1. **Initial Setup**: On application startup, the app establishes a connection to the backend service located at a specified endpoint. This connection requires SSL certificates, which may prompt for admin credentials when installed during local development. +1. **Initial Setup**: On application startup, the app establishes a connection to the backend service located at a specified endpoint. This connection requires SSL certificates, which are automatically generated by vite-plugin-mkcert during development. 2. **User Authentication**: Users must authenticate via Microsoft AAD or MSA accounts. This enables secure access and interaction between the app and the backend. -3. **Data Fetching**: The app makes AJAX requests to the backend service, fetching data such as message history, user sessions, and conversation context. +3. **Data Fetching & State Management**: The app uses Redux Toolkit Query to manage API requests and caching of conversation data, messages, and participant information. 4. **Event Handling**: User actions within the app (e.g., sending a message) trigger RESTful API calls to the backend, which processes the actions and returns the appropriate responses. -5. **State Management**: The backend service keeps track of the conversation state and other relevant information, enabling the app to provide a consistent user experience. +5. **Real-time Updates**: The app uses Server-Sent Events (SSE) to receive real-time updates from the backend service, enabling live updates of conversation state. -#### Error Handling +### Error Handling The app includes error handling mechanisms that notify users of any issues with the backend connection, such as authentication failures or network issues. - ## Setup Guide -The Semantic Workbench app is a React/Typescript app integrated with the [Semantic Workbench](../workbench-service/README.md) backend service. +The Semantic Workbench app is a React/TypeScript app integrated with the [Semantic Workbench](../workbench-service/README.md) backend service. -Follow the [setup guide](../docs/SETUP_DEV_ENVIRONMENT.md) to install the development tools. +### Prerequisites + +- **Node.js**: Version 20.x is required (enforced by the run script) +- **pnpm**: Used for package management +- **SSL certificates**: Automatically generated during development for HTTPS + +Follow the [setup guide](../docs/SETUP_DEV_ENVIRONMENT.md) to install all required development tools. ## Installing dependencies -In the [workbench-app](./) directory +In the [workbench-app](./) directory: ```sh make ``` +This command runs `pnpm install` to install all required dependencies. + ## Running from VS Code -To run and/or debug in VS Code, View->Run, "app: semantic-workbench-app" +To run and/or debug in VS Code: +1. Open the workspace file `semantic-workbench.code-workspace` +2. View -> Run +3. Select "app: semantic-workbench-app" ## Running from the command line -In the [workbench-app](./) directory +In the [workbench-app](./) directory: +```sh +pnpm dev +``` +or ```sh pnpm start ``` -Note: you might be prompted for admin credentials for the SSL certificates used by the app. 
+Then open https://127.0.0.1:4000 in your browser. + +Note: The first time you run the app, your browser may warn about the self-signed SSL certificate. + +## Development Tools + +### Available Scripts + +- `pnpm dev` / `pnpm start` - Start development server +- `pnpm build` - Build for production +- `pnpm preview` - Locally preview production build +- `pnpm lint` - Run ESLint to check code quality +- `pnpm format` / `pnpm prettify` - Format code with Prettier +- `pnpm type-check` - Run TypeScript type checking +- `pnpm find-deadcode` - Identify unused code +- `pnpm depcheck` - Check for dependency issues -## Extra information +## Documentation -### Scripts +### Application Documentation -- `pnpm start` - start dev server -- `pnpm build` - build for production -- `pnpm preview` - locally preview production build +- [App Development Guide](./docs/APP_DEV_GUIDE.md) - Guide for developing the app +- [Message Metadata](./docs/MESSAGE_METADATA.md) - Details about message metadata structure +- [Message Types](./docs/MESSAGE_TYPES.md) - Different types of messages supported +- [State Inspectors](./docs/STATE_INSPECTORS.md) - Information about state inspection tools -### More info +### Related Documentation -- [Message Metadata](./docs/MESSAGE_METADATA.md) -- [Message Types](./docs/MESSAGE_TYPES.md) -- [State Inspectors](./docs/STATE_INSPECTORS.md) +- [Workbench App Overview](../docs/WORKBENCH_APP.md) - Complete overview of the application +- [Custom App Registration](../docs/CUSTOM_APP_REGISTRATION.md) - Setting up authentication + +## Project Structure + +Key directories in the project: + +``` +workbench-app/ +├── src/ +│ ├── components/ # Reusable UI components +│ ├── models/ # TypeScript interfaces and types +│ ├── redux/ # State management +│ ├── routes/ # Application routes +│ ├── services/ # API services +│ └── libs/ # Utility functions and hooks +├── public/ # Static assets +├── docs/ # Documentation +└── certs/ # SSL certificates for development +``` \ No newline at end of file diff --git a/workbench-app/docs/APP_DEV_GUIDE.md b/workbench-app/docs/APP_DEV_GUIDE.md index 2a604518..d40fbcd1 100644 --- a/workbench-app/docs/APP_DEV_GUIDE.md +++ b/workbench-app/docs/APP_DEV_GUIDE.md @@ -1,6 +1,6 @@ # Semantic Workbench App Dev Guide -This is an early collection of notes for conventions being put in place for the development of the Semantic Workbench React/Typescript web app. +This guide covers the conventions and patterns used in the development of the Semantic Workbench React/TypeScript web app. ## Design System @@ -16,61 +16,218 @@ Fluent Copilot (formerly Fluent AI): - Docs: https://ai.fluentui.dev/ - GitHub: https://github.com/microsoft/fluentai -### Styling components +## Architecture Patterns -Create a `useClasses` function that returns an object of classnames using the `mergeStyle` function from the `@fluentui/react` package. Within your component, create a `const classes = useClasses();` and use the classnames in the component. +### Component Organization -Sample: +The app follows these organizational patterns: -``` -import { mergeStyle } from '@fluentui/react'; +- **Feature-based organization**: Components are organized by feature (Conversations, Assistants, etc.) 
+- **Composition**: Complex components are broken down into smaller, reusable components +- **Container/Presentation separation**: Logic and presentation are separated when possible -const useClasses = { - root: mergeStyle({ - color: 'red', - }), -}; +### State Management + +The app uses Redux Toolkit for state management: + +- **Redux store**: Central state for application data +- **Redux Toolkit Query**: Used for API integration and data fetching +- **Slices**: State is divided into slices by feature +- **Custom hooks**: Encapsulate Redux interactions (e.g., `useConversationEvents.ts`) + +## Component Guidelines -const MyButton = () => { +### Styling components + +Create styles using the `makeStyles` function from the `@fluentui/react-components` package: + +```tsx +import { makeStyles, shorthands, tokens } from '@fluentui/react-components'; + +// Define styles as a hook function +const useClasses = makeStyles({ + root: { + display: 'flex', + backgroundColor: tokens.colorNeutralBackground3, + ...shorthands.padding(tokens.spacingVerticalM), + ...shorthands.borderRadius(tokens.borderRadiusMedium), + }, + content: { + display: 'flex', + flexDirection: 'column', + gap: tokens.spacingVerticalS, + }, +}); + +const MyComponent = () => { + // Use the styles in your component const classes = useClasses(); return (
+        <div className={classes.root}>
+            <div className={classes.content}>
+                {/* component content goes here */}
+            </div>
+        </div>
); }; ``` -Docs: - -- Fluent: Styling components: https://react.fluentui.dev/?path=/docs/concepts-developer-styling-components--docs +Documentation: +- Fluent styling components: https://react.fluentui.dev/?path=/docs/concepts-developer-styling-components--docs - Griffel: https://griffel.js.org/ ### Z-index -Use the Fluent tokens for z-index. +Use Fluent tokens for z-index values to maintain consistency: -- zIndex values - - .zIndexBackground = 0 - - .zIndexContent = 1 - - .zIndexOverlay = 1000 - - .zIndexPopup = 2000 - - .zIndexMessage = 3000 - - .zIndexFloating = 4000 - - .zIndexPriority = 5000 - - .zIndexDebug = 6000 +```tsx +import { makeStyles, tokens } from '@fluentui/react-components'; -Sample: +const useClasses = makeStyles({ + overlay: { + position: 'absolute', + zIndex: tokens.zIndexOverlay, + }, +}); +``` +Z-index token values: +- `tokens.zIndexBackground` = 0 +- `tokens.zIndexContent` = 1 +- `tokens.zIndexOverlay` = 1000 +- `tokens.zIndexPopup` = 2000 +- `tokens.zIndexMessage` = 3000 +- `tokens.zIndexFloating` = 4000 +- `tokens.zIndexPriority` = 5000 +- `tokens.zIndexDebug` = 6000 + +## Common Patterns + +### Custom Hooks + +Create custom hooks to encapsulate reusable logic: + +```tsx +import { useCallback, useState } from 'react'; + +export function useToggle(initialState = false) { + const [state, setState] = useState(initialState); + + const toggle = useCallback(() => { + setState(state => !state); + }, []); + + return [state, toggle]; +} ``` -import { mergeStyles, tokens } from '@fluentui/react'; -const useClasses = { - root: mergeStyle({ - position: 'relative', - zIndex: tokens.zIndexContent, - }), +# Design Principles and Technical Standards + +## Form and Configuration UIs + +The Semantic Workbench uses React JSON Schema Form (@rjsf) with Fluent UI bindings as the **standard approach** for all configuration UIs. This is an intentional architectural decision that: + +1. Ensures consistency across the application +2. Allows rapid development through declarative UI +3. Supports runtime-generated forms based on server schemas +4. Maintains compatibility with our existing components + +### Standard Implementation + +All configuration and form UIs **must** use the RJSF approach with Fluent UI: + +```tsx +import Form from '@rjsf/fluentui-rc'; +import validator from '@rjsf/validator-ajv8'; +import { RJSFSchema } from '@rjsf/utils'; + +// Use our customized templates +import { CustomizedFieldTemplate } from '../App/FormWidgets/CustomizedFieldTemplate'; +import { CustomizedObjectFieldTemplate } from '../App/FormWidgets/CustomizedObjectFieldTemplate'; +import { CustomizedArrayFieldTemplate } from '../App/FormWidgets/CustomizedArrayFieldTemplate'; + +const schema: RJSFSchema = { + type: 'object', + properties: { + name: { type: 'string', title: 'Name' }, + description: { type: 'string', title: 'Description' } + } }; + +const FormComponent = ({ onSubmit }) => { + return ( +
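+        {/* Render the RJSF form; the prop names below follow the @rjsf v5 API and the
+            customized templates imported above (illustrative wiring, not a verified example) */}
+        <Form
+            schema={schema}
+            validator={validator}
+            templates={{
+                FieldTemplate: CustomizedFieldTemplate,
+                ObjectFieldTemplate: CustomizedObjectFieldTemplate,
+                ArrayFieldTemplate: CustomizedArrayFieldTemplate,
+            }}
+            onSubmit={onSubmit}
+        />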
+ ); +}; +``` + +### Extending Form Functionality + +When customization is needed, extend the standard approach through: + +1. **Custom Widgets**: Create specialized widgets in the `FormWidgets` directory +2. **Custom Templates**: Extend existing templates rather than creating new ones +3. **UISchema**: Use UISchema for layout/appearance changes without custom code + +```tsx +// Example of registering a custom widget +const widgets: RegistryWidgetsType = { + BaseModelEditor: BaseModelEditorWidget, + Inspectable: InspectableWidget +}; + + +``` + +This standardized approach is a core architectural principle - any significant UI improvements should work within this framework rather than introducing alternative patterns. + +### Error Handling + +Use consistent error handling patterns: + +```tsx +try { + // Operation that might fail +} catch (error) { + // Log the error + console.error('Failed to perform operation', error); + + // Show error notification to user + notifyError('Operation failed', error.message); +} +``` + +## Accessibility + +- Use semantic HTML elements (`button`, `nav`, `header`, etc.) +- Ensure proper keyboard navigation +- Add appropriate ARIA attributes +- Maintain sufficient color contrast +- Support screen readers with meaningful labels + +## Testing + +The app uses React Testing Library for component testing: + +```tsx +import { render, screen, fireEvent } from '@testing-library/react'; + +test('button click increments counter', () => { + render(); + const button = screen.getByRole('button', { name: /increment/i }); + fireEvent.click(button); + expect(screen.getByText('Count: 1')).toBeInTheDocument(); +}); ``` diff --git a/workbench-app/docs/MESSAGE_METADATA.md b/workbench-app/docs/MESSAGE_METADATA.md index 3f669641..1948dfdb 100644 --- a/workbench-app/docs/MESSAGE_METADATA.md +++ b/workbench-app/docs/MESSAGE_METADATA.md @@ -133,7 +133,7 @@ Example: } ], "tool_result": { - "tool_call_id": "tool_result_1", + "tool_call_id": "tool_call_1", }, } } diff --git a/workbench-app/docs/MESSAGE_TYPES.md b/workbench-app/docs/MESSAGE_TYPES.md index 6242a3a1..02f7dbe4 100644 --- a/workbench-app/docs/MESSAGE_TYPES.md +++ b/workbench-app/docs/MESSAGE_TYPES.md @@ -34,4 +34,4 @@ Any `chat` messages that start with a `/` will be automatically converted to a ` Optionally, `directed_at` metadata may be populated via the input UX or by the assistant that generates the command. The `directed_at` metadata is used to specify the target of the command. It is up to the assistant to interpret the `directed_at` metadata and decide how to handle the command. For example, you code your assistant to only respond to commands that are directed at it. Note that all commands are sent to all assistants in the conversation and each can choose to respond or ignore the command, regardless of the `directed_at` metadata. -Assistants should respond to `command` messages with a `command_response` message. The `command_response` message should contain the response to the command. The app will render the `command_response` message differently than a `chat` message and is also optionally considered or not as part of the chat history whenever performing options that consider the history of the conversation. +Assistants should respond to `command` messages with a `command-response` message. The `command-response` message should contain the response to the command. 
The app will render the `command-response` message differently than a `chat` message and is also optionally considered or not as part of the chat history whenever performing options that consider the history of the conversation. diff --git a/workbench-app/docs/README.md b/workbench-app/docs/README.md new file mode 100644 index 00000000..0ba7b138 --- /dev/null +++ b/workbench-app/docs/README.md @@ -0,0 +1,25 @@ +# Workbench App Documentation + +This directory contains documentation specific to the Semantic Workbench React application. + +## Core Documentation + +- [App Development Guide](./APP_DEV_GUIDE.md) - Comprehensive guide for developing the application +- [Message Types](./MESSAGE_TYPES.md) - Different message types supported in the system +- [Message Metadata](./MESSAGE_METADATA.md) - Metadata structure and usage for messages +- [State Inspectors](./STATE_INSPECTORS.md) - Using and implementing state inspectors + +## Key Development Principles + +The Semantic Workbench application follows these key principles: + +1. **Component Library**: Uses Fluent UI React v9 and Fluent Copilot components +2. **Form Generation**: Uses React JSON Schema Form (@rjsf) for all configuration interfaces +3. **State Management**: Uses Redux Toolkit for centralized state +4. **Type Safety**: Leverages TypeScript throughout the codebase + +See the [App Development Guide](./APP_DEV_GUIDE.md) for detailed guidance on these principles. + +## Contributing + +When contributing to the Workbench App, please follow the existing patterns and principles documented here. Pay special attention to the [Design Principles and Technical Standards](./APP_DEV_GUIDE.md#design-principles-and-technical-standards) section. \ No newline at end of file diff --git a/workbench-app/docs/STATE_INSPECTORS.md b/workbench-app/docs/STATE_INSPECTORS.md index de048623..01970968 100644 --- a/workbench-app/docs/STATE_INSPECTORS.md +++ b/workbench-app/docs/STATE_INSPECTORS.md @@ -1,13 +1,106 @@ # State Inspectors -Each assistant can have one or more state inspectors. A state inspector is a component that can be used to inspect the state of the assistant. The `config` editor for an assistant is an example of a special state inspector that is required for each assistant. +## Overview -States beyond the required `config` state exposed by an assistant will be available in the assistant's conversation view. Clicking on the `Show Inspectors` UI will cause a tabbed view to be rendered, with each tab mapping to an exposed state. The inspector view will attempt to render the state in the most user-friendly way, based upon the content/data provided. +State inspectors provide a way to visualize and interact with an assistant's internal state. Each assistant can have multiple state inspectors, with the `config` editor being a required inspector for all assistants. -- If the state `data` property contains a key of `content` that is a string, it will be rendered as text, supporting markdown and/or html formatting, in addition to plain text. -- If there is a `JsonSchema` property in the state, it will be rendered as custom UI, based upon the schema. If `UISchema` is also provided, it will be used to customize the UI. -- Lastly, the state will be rendered as formatted JSON. 
+State inspectors can be used for: +- Debugging assistant behavior +- Monitoring internal state changes +- Providing interactive interfaces for modifying assistant state +- Exposing data and attachments for user inspection -If `JsonSchema` is provided, the state inspector will also provide a button to allow the user to edit the state. The user will be presented with a dialog that will allow them to edit the state. The dialog will be rendered based upon the `JsonSchema` and `UISchema` properties of the state. The user will be able to save the changes, or cancel the dialog. If the user saves the changes, the state will be updated, and the assistant will be notified of the change. +## Accessing State Inspectors -Assistant state change events can be fired to cause the inspector UI to be updated for real-time state inspection. This can be useful for debugging purposes, or for providing a real-time view of the state of the assistant. +State inspectors are available in the assistant's conversation view. To access them: + +1. Join a conversation with an assistant +2. Click the `Show Inspectors` button in the conversation interface +3. A tabbed view will appear with each tab representing a different state + +## Rendering Methods + +The inspector view will render state based on its content: + +1. **Content-based Rendering**: If the state's `data` property contains a `content` key with a string value, it will be rendered as: + - Markdown for formatted text + - HTML for rich content + - Plain text for simple content + +2. **Schema-based Rendering**: If the state includes a `JsonSchema` property, a custom UI will be generated based on the schema: + - Form elements will be created for each schema property + - If a `UISchema` property is provided, it will customize the UI appearance + - Validation will follow the schema rules + +3. **JSON Fallback**: If neither of the above applies, the state will be rendered as formatted JSON. + +## Interactive State Editing + +When a state includes a `JsonSchema` property, the inspector provides editing capabilities: + +1. An edit button will appear in the inspector +2. Clicking it opens a dialog with form elements based on the schema +3. Users can modify values and save changes +4. Changes are sent to the assistant, which is notified of the update + +## Attachments Support + +State inspectors can include file attachments: + +1. If a state contains `attachments` data, files will be displayed with download options +2. Users can download attachments for local inspection +3. Common file types may have preview capabilities + +## Implementing State Inspectors + +Assistants can implement state inspectors by: + +1. Defining data models with appropriate JSON schemas +2. Exposing state through the assistant service API +3. Handling state change notifications from the user interface +4. 
Updating state in response to internal events + +## Example Schema + +```json +{ + "JsonSchema": { + "type": "object", + "properties": { + "name": { + "type": "string", + "title": "Name" + }, + "enabled": { + "type": "boolean", + "title": "Enabled" + }, + "settings": { + "type": "object", + "properties": { + "threshold": { + "type": "number", + "title": "Threshold", + "minimum": 0, + "maximum": 1 + } + } + } + } + }, + "UISchema": { + "settings": { + "ui:expandable": true + } + }, + "data": { + "name": "My Assistant", + "enabled": true, + "settings": { + "threshold": 0.7 + } + } +} +``` + +State inspectors provide powerful debugging and interaction capabilities for both developers and users of the Semantic Workbench platform. diff --git a/workbench-service/README.md b/workbench-service/README.md index c76ff2ae..d4e281c3 100644 --- a/workbench-service/README.md +++ b/workbench-service/README.md @@ -4,52 +4,169 @@ The Semantic Workbench service consists of several key components that interact to provide a seamless user experience: -**Workbench Service**: A backend Python service that handles state management, user interactions, and assists in broker functionalities. +**Workbench Service**: A backend Python service that handles state management, user interactions, conversation history, file storage, and real-time event distribution. -[**Workbench App**](../workbench-app): A single-page web application written in TypeScript and React, compiled into static HTML that runs in the user’s browser. +[**Workbench App**](../workbench-app): A single-page web application written in TypeScript and React that provides the user interface for interacting with assistants. -**FastAPI Framework**: Utilized for the HTTP API, providing endpoints and continuous communication between the Workbench and assistants. +**FastAPI Framework**: Powers the HTTP API and Server-Sent Events (SSE) for real-time updates between clients and assistants. -**Assistants**: Independently developed units that connect to the Workbench Service through a RESTful API. Assistants can manage their own state and handle connections to various language models. +**Database Layer**: Uses SQLModel (SQLAlchemy) with support for both SQLite (development) and PostgreSQL (production) for persistent storage. + +**Authentication**: Integrated Azure AD/Microsoft authentication with JWT token validation. + +**Assistants**: Independently developed services that connect to the Workbench through a RESTful API, enabling AI capabilities. ![Architecture Diagram](../docs/images/architecture-animation.gif) +### Core Components + +- **Controller Layer**: Implements business logic for conversations, assistants, files, and users +- **Database Models**: SQLModel-based entities for storing application state +- **Authentication**: Azure AD integration with JWT validation +- **File Storage**: Versioned file system for conversation attachments and artifacts +- **Event System**: Real-time event distribution using Server-Sent Events (SSE) +- **API Layer**: RESTful endpoints for all service operations + ### Communication -The communication between the Workbench and Assistants is managed through HTTP requests: +The communication between the Workbench and Assistants is managed through multiple channels: + +1. **HTTP API**: RESTful endpoints for CRUD operations and state management +2. **Server-Sent Events (SSE)**: Real-time event streaming for immediate updates +3. 
**Event System**: Structured event types (e.g., `message.created`, `conversation.updated`) for real-time state synchronization +4. **Webhook Callbacks**: Assistant registration with callback URLs for event delivery + +### Database Structure + +The service uses SQLModel to manage structured data: + +- **Users**: Authentication and profile information +- **Conversations**: Messaging history and metadata +- **Messages**: Different message types with content and metadata +- **Participants**: Users and assistants in conversations +- **Files**: Versioned attachments for conversations +- **Assistants**: Registered assistants and their configurations +- **Shares**: Conversation sharing capabilities + +## Features + +### Conversation Management +- Create, update, and delete conversations +- Add and remove participants +- Different message types (chat, note, notice, command) +- Message metadata and debug information -1. **Initialization**: Assistants notify the Workbench about their presence and provide a callback URL. -2. **Message Handling**: Both the Workbench and Assistants can send HTTP requests to each other as needed. -3. **Events**: There are several types of events (e.g., `message created`) that are handled through designated HTTP endpoints. +### File Management +- File attachment support +- Versioned file storage +- Multiple content types -### Agents +### Sharing +- Share conversations with other users +- Public/private share links +- Share redemption + +### Integration with Assistants +- Assistant registration and discovery +- API key management for secure communication +- Event-based communication + +### Speech Services +- Azure Speech integration for text-to-speech + +## Configuration + +The service is configured through environment variables: + +``` +# Basic configuration +WORKBENCH_SERVICE_HOST=127.0.0.1 +WORKBENCH_SERVICE_PORT=5000 + +# Database settings +WORKBENCH_SERVICE_DB_CONNECTION=sqlite+aiosqlite:///./workbench-service.db +# Or for PostgreSQL: +# WORKBENCH_SERVICE_DB_CONNECTION=postgresql+asyncpg://user:pass@host:port/dbname + +# Authentication +WORKBENCH_SERVICE_TENANT_ID=your-azure-tenant-id +WORKBENCH_SERVICE_CLIENT_ID=your-client-id + +# File storage +WORKBENCH_SERVICE_FILES_DIR=./.data/files +``` -Each assistant (or agent) is registered and maintains its connection through continuous ping requests to the Workbench. This ensures that the state information and response handling remain synchronized. +See the [environment setup guide](../docs/SETUP_DEV_ENVIRONMENT.md) for complete configuration options. ## Setup Guide -The Semantic Workbench service is a Python service that provides the backend functionality of the Semantic Workbench. +### Prerequisites -Follow the [setup guide](../docs/SETUP_DEV_ENVIRONMENT.md) to install the development tools. +- Python 3.11+ +- Access to database (SQLite for development, PostgreSQL for production) +- Azure AD application registration (for authentication) -## Installing dependencies +### Installing Dependencies -In the [workbench-service](./) directory +In the [workbench-service](./) directory: ```sh make ``` +This will use [uv](https://github.com/astral-sh/uv) to install all Python dependencies. + If this fails in Windows, try running a vanilla instance of `cmd` or `powershell` and not within `Cmder` or another shell that may have modified the environment. 
-## Running from VS Code +### Database Migration + +The service uses Alembic for database migrations: + +```sh +# Initialize the database +uv run alembic upgrade head +``` + +### Running from VS Code + +To run and/or debug in VS Code: +1. Open the workspace file `semantic-workbench.code-workspace` +2. View->Run +3. Select "service: semantic-workbench-service" + +### Running from the Command Line -To run and/or debug in VS Code, View->Run, "service: semantic-workbench-service" +In the [workbench-service](./) directory: -## Running from the command line +```sh +uv run start-service [--host HOST] [--port PORT] +``` -In the [workbench-service](./) directory +### Running Tests ```sh -uv run start-service +uv run pytest ``` + +## API Documentation + +When running the service, access the FastAPI auto-generated documentation at: + +- Swagger UI: `http://localhost:5000/docs` +- ReDoc: `http://localhost:5000/redoc` + +## Troubleshooting + +### Common Issues + +- **Database connection errors**: Verify your connection string and database permissions +- **Authentication failures**: Check your Azure AD configuration and client IDs +- **File storage permissions**: Ensure the service has write access to the files directory + +### Debug Mode + +Enable debug logging for more detailed information: + +```sh +WORKBENCH_SERVICE_LOG_LEVEL=DEBUG uv run start-service +``` \ No newline at end of file