WIP: Add Claude API support alongside existing GPT API #14

Draft
Copilot wants to merge 4 commits from copilot/add-claude-api-support into main
Copilot commented 2025-12-03 16:57:59 +07:00 (Migrated from github.com)

Adds Anthropic Claude models as an alternative AI provider, allowing users to choose between OpenAI and Claude models via the existing /choose_model or /set_model commands.

Changes

New Claude models available:

  • claude/claude-3-5-sonnet - Best balance of intelligence and speed
  • claude/claude-3-5-haiku - Fastest, most affordable
  • claude/claude-3-opus - Most capable for complex tasks

Configuration:

  • New ANTHROPIC_API_KEY env var in .env.example and config.py
  • Claude model pricing added to pricing.py
  • Token limits configured in MODEL_TOKEN_LIMITS
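
For illustration only, the token-limit entries might look like the sketch below, assuming MODEL_TOKEN_LIMITS maps a model id to its context window (Claude 3.x models advertise a 200K-token context); the actual keys and values live in config.py and pricing.py:

    # Hypothetical sketch of the new config entries; real values are defined in config.py.
    import os

    MODEL_TOKEN_LIMITS = {
        # ... existing OpenAI entries ...
        "claude/claude-3-5-sonnet": 200000,
        "claude/claude-3-5-haiku": 200000,
        "claude/claude-3-opus": 200000,
    }
    ANTHROPIC_API_KEY = os.getenv("ANTHROPIC_API_KEY")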

New module src/utils/claude_utils.py:

  • is_claude_model() - Detects Claude model selection
  • convert_messages_for_claude() - Converts OpenAI message format to Claude's format (extracts system prompt, handles content arrays)
  • call_claude_api() - Async wrapper for Anthropic API calls
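
For context, here is a minimal sketch of what the first two helpers could look like; only the function names, the claude/ prefix, and the (system_prompt, messages) return shape come from this PR and its tests, the rest is assumption:

    # Hedged sketch; the real claude_utils.py may differ in details.
    def is_claude_model(model: str) -> bool:
        """Detect Claude model selection via the 'claude/' prefix."""
        return model.startswith("claude/")

    def convert_messages_for_claude(messages):
        """Extract the system prompt and keep user/assistant turns,
        flattening OpenAI-style content arrays into plain text."""
        system_prompt = None
        converted = []
        for msg in messages:
            role = msg.get("role")
            content = msg.get("content", "")
            if role == "system":
                # Note: overwriting means only the last system message survives
                system_prompt = content if isinstance(content, str) else str(content)
                continue
            if role not in ("user", "assistant"):
                continue  # tool/function messages are dropped on the text-only Claude path
            if isinstance(content, list):
                # Join the text parts of an OpenAI content array
                content = "\n".join(p.get("text", "") for p in content if p.get("type") == "text")
            converted.append({"role": role, "content": content})
        return system_prompt, converted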

Integration points:

  • bot.py - Initializes Claude client when API key present
  • message_handler.py - Routes requests to appropriate API based on model prefix
  • commands.py - /search and /web commands support Claude
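
As an illustration of the routing above, the dispatch in message_handler.py could look roughly like this; the _call_openai_api helper name is hypothetical, and only the prefix check and the call_claude_api call follow the PR description:

    # Rough sketch of model-prefix routing; the actual message_handler.py wiring may differ.
    async def generate_response(self, model: str, messages: list) -> str:
        if is_claude_model(model) and self.claude_client is not None:
            result = await call_claude_api(self.claude_client, messages, model)
            if result.get("success"):
                return result["content"]
            raise RuntimeError(result.get("error", "Claude API call failed"))
        # Fall back to the existing OpenAI path for every other model.
        return await self._call_openai_api(model, messages)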

Usage

# .env
ANTHROPIC_API_KEY=sk-ant-...

Users select Claude models the same way as OpenAI models. Note: Claude models don't currently support function calling/tools; conversations with Claude are text-only.
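
Client initialization is conditional on the key being present, so deployments without an Anthropic key keep working unchanged. A minimal sketch of that pattern, assuming the async client from the anthropic SDK (bot.py may structure this differently):

    import os
    from anthropic import AsyncAnthropic  # requires anthropic>=0.39.0

    anthropic_api_key = os.getenv("ANTHROPIC_API_KEY")
    # Only create the Claude client when a key is configured;
    # otherwise Claude models are simply unavailable and OpenAI models work as before.
    claude_client = AsyncAnthropic(api_key=anthropic_api_key) if anthropic_api_key else None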

Warning

Firewall rules blocked me from connecting to one or more addresses

I tried to connect to the following addresses, but was blocked by firewall rules:

  • https://api.github.com//advisories
    • Triggering command: /home/REDACTED/work/_temp/ghcca-node/node/bin/node /home/REDACTED/work/_temp/ghcca-node/node/bin/node --enable-source-maps /home/REDACTED/work/_temp/copilot-developer-action-main/dist/index.js (http block)
  • openaipublic.blob.core.windows.net
    • Triggering command: /usr/bin/python python -m pytest tests/ -v --tb=short (dns block)
    • Triggering command: `/usr/bin/python python -c
      from src.utils.claude_utils import is_claude_model, call_claude_api, convert_messages_for_claude
      print('claude_utils imported successfully')
      from src.module.message_handler import MessageHandler
      print('message_handler imported successfully')
      from src.c` (dns block)
    • Triggering command: /usr/bin/python python -m pytest tests/test_bot.py::TestClaudeUtils -v (dns block)

If you need me to access, download, or install something from one of these locations, you can either:

  • Configure Actions setup steps (https://gh.io/copilot/actions-setup-steps) to set up my environment, which run before the firewall is enabled
  • Add the appropriate URLs or hosts to the custom allowlist in this repository's Copilot coding agent settings (admins only)

Original prompt

I want to add claude api (so that user can use it alongside with current gpt api)


💡 You can make Copilot smarter by setting up custom instructions, customizing its development environment and configuring Model Context Protocol (MCP) servers. Learn more about Copilot coding agent tips in the docs.

cauvang32 (Migrated from github.com) reviewed 2025-12-03 16:57:59 +07:00
coderabbitai[bot] commented 2025-12-03 16:58:11 +07:00 (Migrated from github.com)

Important

Review skipped

Bot user detected.

To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


Comment @coderabbitai help to get the list of available commands and usage tips.

copilot-pull-request-reviewer[bot] (Migrated from github.com) reviewed 2025-12-06 00:35:54 +07:00
copilot-pull-request-reviewer[bot] (Migrated from github.com) left a comment

Pull request overview

This PR adds support for Anthropic's Claude models alongside the existing OpenAI GPT models, enabling users to select Claude models through the same /choose_model or /set_model commands. The implementation creates a parallel infrastructure for Claude API calls while maintaining backward compatibility with existing OpenAI functionality.

Key changes:

  • New claude_utils.py module provides Claude-specific API handling with message format conversion
  • Claude client initialization in bot.py with graceful degradation when API key is not provided
  • Integration in message_handler.py and commands.py to route requests to Claude or OpenAI based on model selection
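
To make the first bullet concrete, call_claude_api presumably wraps the Anthropic SDK's messages.create call and normalizes the result; a hedged sketch under that assumption (the success/content shape follows the example test quoted later in this review, while the max_tokens default and the usage keys are guesses):

    import logging

    # Hedged sketch only; the real call_claude_api may cover more cases.
    async def call_claude_api(claude_client, messages, model, max_tokens=4096):
        try:
            system_prompt, claude_messages = convert_messages_for_claude(messages)
            api_params = {
                "model": get_anthropic_model_name(model),  # e.g. strips the 'claude/' prefix
                "max_tokens": max_tokens,
                "messages": claude_messages,
            }
            if system_prompt:
                api_params["system"] = system_prompt
            response = await claude_client.messages.create(**api_params)
            return {
                "success": True,
                "content": "".join(block.text for block in response.content),
                "input_tokens": response.usage.input_tokens,
                "output_tokens": response.usage.output_tokens,
            }
        except Exception as exc:
            logging.error(f"Claude API call failed: {exc}")
            return {"success": False, "error": str(exc)}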

Reviewed changes

Copilot reviewed 8 out of 10 changed files in this pull request and generated 6 comments.

Show a summary per file:

  • src/utils/claude_utils.py - New utility module for Claude API integration with message format conversion and API calling functions
  • bot.py - Initializes Claude client when API key is available and passes it to message handler and commands
  • src/module/message_handler.py - Routes text messages to Claude API when Claude model is selected, with appropriate error handling
  • src/commands/commands.py - Adds Claude support to /search and /web commands with API routing logic
  • src/config/config.py - Adds Claude models to MODEL_OPTIONS, MODEL_TOKEN_LIMITS, and ANTHROPIC_API_KEY environment variable
  • src/config/pricing.py - Adds pricing information for three Claude models (Sonnet, Haiku, Opus)
  • requirements.txt - Adds anthropic>=0.39.0 dependency
  • .env.example - Documents ANTHROPIC_API_KEY configuration option
  • .gitignore - Improves Python cache file patterns
  • tests/test_bot.py - Adds unit tests for Claude utility functions (is_claude_model, get_anthropic_model_name, convert_messages_for_claude)

💡 Add Copilot custom instructions for smarter, more guided reviews. Learn how to get started.

copilot-pull-request-reviewer[bot] (Migrated from github.com) commented 2025-12-06 00:35:54 +07:00

Print statement may execute during import.

@@ -15,6 +15,7 @@ import base64
import traceback
from datetime import datetime, timedelta
from src.utils.openai_utils import process_tool_calls, prepare_messages_for_api, get_tools_for_model
from src.utils.claude_utils import is_claude_model, call_claude_api, convert_messages_for_claude
copilot-pull-request-reviewer[bot] (Migrated from github.com) commented 2025-12-06 00:35:54 +07:00

Import of 'convert_messages_for_claude' is not used.

from src.utils.claude_utils import is_claude_model, call_claude_api
@@ -0,0 +50,4 @@
# Extract system message
if role == 'system':
system_prompt = content if isinstance(content, str) else str(content)
continue
copilot-pull-request-reviewer[bot] (Migrated from github.com) commented 2025-12-06 00:35:53 +07:00

If there are multiple system messages in the input, only the last one will be preserved since system_prompt is overwritten in each iteration. This could lead to loss of important system context.

Consider either:

  1. Concatenating multiple system messages: system_prompt = (system_prompt or "") + "\n" + content
  2. Logging a warning if multiple system messages are detected
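
For example, the system branch could accumulate instead of overwrite (sketch of option 1 only):

    if role == 'system':
        text = content if isinstance(content, str) else str(content)
        # Concatenate rather than overwrite, so every system message is preserved
        system_prompt = text if system_prompt is None else system_prompt + "\n" + text
        continue
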
@@ -0,0 +61,4 @@
converted_messages.append({
"role": role,
"content": content
})
copilot-pull-request-reviewer[bot] (Migrated from github.com) commented 2025-12-06 00:35:53 +07:00

When content is an empty string, an empty message is still added to converted_messages. While Claude may handle empty content gracefully, it's better to skip empty messages to avoid potential API errors.

Consider adding:

if isinstance(content, str):
    if content:  # Only add non-empty content
        converted_messages.append({
            "role": role,
            "content": content
        })
@@ -0,0 +151,4 @@
api_params = {
"model": anthropic_model,
"max_tokens": max_tokens,
"messages": claude_messages,
copilot-pull-request-reviewer[bot] (Migrated from github.com) commented 2025-12-06 00:35:52 +07:00

If the input messages contain only system messages, tool calls, or function calls, claude_messages will be empty after filtering. The Claude API requires at least one message in the messages array, which would cause an API error.

Consider adding validation:

if not claude_messages:
    logging.warning("No valid messages for Claude after conversion")
    # Either return an error or add a default message
@@ -378,0 +432,4 @@
self.assertIsNone(system_prompt)
self.assertEqual(len(converted), 2)
copilot-pull-request-reviewer[bot] (Migrated from github.com) commented 2025-12-06 00:35:53 +07:00

The call_claude_api function, which is a critical integration point with the Claude API, lacks test coverage. Consider adding tests for:

  • Successful API calls with various message formats
  • Error handling scenarios (network errors, API errors, rate limits)
  • Token usage tracking
  • Response parsing

Example test structure:

async def test_call_claude_api_success(self):
    from src.utils.claude_utils import call_claude_api
    
    # Mock the Anthropic client
    mock_client = Mock()
    mock_response = Mock()
    mock_response.content = [Mock(text="Test response")]
    mock_response.usage = Mock(input_tokens=10, output_tokens=20)
    mock_client.messages.create = AsyncMock(return_value=mock_response)
    
    # Test the call
    result = await call_claude_api(mock_client, messages, "claude/claude-3-5-sonnet")
    self.assertTrue(result["success"])
    self.assertEqual(result["content"], "Test response")
This pull request is marked as a work in progress.

Checkout

From your project repository, check out a new branch and test the changes.
git fetch -u origin copilot/add-claude-api-support:copilot/add-claude-api-support
git checkout copilot/add-claude-api-support