
initial thinking support for claude #15092


Merged · 13 commits · Mar 27, 2025
4 changes: 1 addition & 3 deletions CHANGELOG.md
@@ -12,9 +12,7 @@
- [core] migration from deprecated `phosphorJs` to actively maintained fork `Lumino` [#14320](https://github.com/eclipse-theia/theia/pull/14320) - Contributed on behalf of STMicroelectronics
Adopters importing `@phosphor` packages now need to import from `@lumino`. CSS selectors referring to `.p-` classes now need to refer to `.lm-` classes. There are also minor code adaptations, for example now using `iconClass` instead of `icon` in Lumino commands.
- [core] typing of `addKeyListener` and `Widget.addKeyListener` corrected to reflect events for `additionalEventTypes`. Adopters declaring handlers explicitly expecting `KeyboardEvent` together with `additionalEventTypes` may need to update type declarations. [#15210]

<a name="breaking_changes_1.60.0">[Breaking Changes:](#breaking_changes_1.60.0)</a>

- [ai] the format of the `ai-features.modelSettings.requestSettings` setting has changed. Furthermore, the request object for LLMs changed slightly as the message types were improved. [#15092]
- [ai-chat] `ParsedChatRequest.variables` is now `ResolvedAIVariable[]` instead of a `Map<string, AIVariable>` [#15196](https://github.com/eclipse-theia/theia/pull/15196)
- [ai-chat] `ChatRequestParser.parseChatRequest` is now asynchronous and expects an additional `ChatContext` parameter [#15196](https://github.com/eclipse-theia/theia/pull/15196)

@@ -17,14 +17,13 @@
import {
AbstractStreamParsingChatAgent,
ChatAgent,
ChatMessage,
ChatModel,
MutableChatRequestModel,
lastProgressMessage,
QuestionResponseContentImpl,
unansweredQuestions
} from '@theia/ai-chat';
import { Agent, PromptTemplate } from '@theia/ai-core';
import { Agent, LanguageModelMessage, PromptTemplate } from '@theia/ai-core';
import { injectable, interfaces, postConstruct } from '@theia/core/shared/inversify';

export function bindAskAndContinueChatAgentContribution(bind: interfaces.Bind): void {
@@ -161,15 +160,15 @@ export class AskAndContinueChatAgent extends AbstractStreamParsingChatAgent {
* As the question/answer are handled within the same response, we add an additional user message at the end to indicate to
* the LLM to continue generating.
*/
protected override async getMessages(model: ChatModel): Promise<ChatMessage[]> {
protected override async getMessages(model: ChatModel): Promise<LanguageModelMessage[]> {
const messages = await super.getMessages(model, true);
const requests = model.getRequests();
if (!requests[requests.length - 1].response.isComplete && requests[requests.length - 1].response.response?.content.length > 0) {
return [...messages,
{
type: 'text',
actor: 'user',
query: 'Continue generating based on the user\'s answer or finish the conversation if 5 or more questions were already answered.'
text: 'Continue generating based on the user\'s answer or finish the conversation if 5 or more questions were already answered.'
}];
}
return messages;
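Note: the hunk above swaps the chat-layer `ChatMessage` for the core `LanguageModelMessage`, whose text variant carries `text` instead of `query`. A minimal sketch of appending the trailing user message under the new shape; the field names are taken from the object literal in this diff and should be treated as an inference about the `@theia/ai-core` union, not a confirmed API:

```ts
import { LanguageModelMessage } from '@theia/ai-core';

// Sketch: append a trailing 'user' text message so the LLM keeps generating.
// The { actor, type, text } literal mirrors the one used in this diff.
function appendContinuationHint(messages: LanguageModelMessage[]): LanguageModelMessage[] {
    return [
        ...messages,
        { actor: 'user', type: 'text', text: 'Continue generating based on the user\'s answer.' }
    ];
}
```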
32 changes: 16 additions & 16 deletions package-lock.json

Some generated files are not rendered by default.

2 changes: 1 addition & 1 deletion packages/ai-anthropic/package.json
@@ -3,7 +3,7 @@
"version": "1.59.0",
"description": "Theia - Anthropic Integration",
"dependencies": {
"@anthropic-ai/sdk": "^0.32.1",
"@anthropic-ai/sdk": "^0.39.0",
"@theia/ai-core": "1.59.0",
"@theia/core": "1.59.0"
},
@@ -18,7 +18,6 @@ import { FrontendApplicationContribution, PreferenceService } from '@theia/core/
import { inject, injectable } from '@theia/core/shared/inversify';
import { AnthropicLanguageModelsManager, AnthropicModelDescription } from '../common';
import { API_KEY_PREF, MODELS_PREF } from './anthropic-preferences';
import { PREFERENCE_NAME_REQUEST_SETTINGS, RequestSetting } from '@theia/ai-core/lib/browser/ai-core-preferences';

const ANTHROPIC_PROVIDER_ID = 'anthropic';

@@ -47,17 +46,14 @@ export class AnthropicFrontendApplicationContribution implements FrontendApplica
this.manager.setApiKey(apiKey);

const models = this.preferenceService.get<string[]>(MODELS_PREF, []);
const requestSettings = this.getRequestSettingsPref();
this.manager.createOrUpdateLanguageModels(...models.map(modelId => this.createAnthropicModelDescription(modelId, requestSettings)));
this.manager.createOrUpdateLanguageModels(...models.map(modelId => this.createAnthropicModelDescription(modelId)));
this.prevModels = [...models];

this.preferenceService.onPreferenceChanged(event => {
if (event.preferenceName === API_KEY_PREF) {
this.manager.setApiKey(event.newValue);
} else if (event.preferenceName === MODELS_PREF) {
this.handleModelChanges(event.newValue as string[]);
} else if (event.preferenceName === PREFERENCE_NAME_REQUEST_SETTINGS) {
this.handleRequestSettingsChanges(event.newValue as RequestSetting[]);
}
});
});
@@ -71,31 +67,19 @@
const modelsToAdd = [...updatedModels].filter(model => !oldModels.has(model));

this.manager.removeLanguageModels(...modelsToRemove.map(model => `${ANTHROPIC_PROVIDER_ID}/${model}`));
const requestSettings = this.getRequestSettingsPref();
this.manager.createOrUpdateLanguageModels(...modelsToAdd.map(modelId => this.createAnthropicModelDescription(modelId, requestSettings)));
this.manager.createOrUpdateLanguageModels(...modelsToAdd.map(modelId => this.createAnthropicModelDescription(modelId)));
this.prevModels = newModels;
}

private getRequestSettingsPref(): RequestSetting[] {
return this.preferenceService.get<RequestSetting[]>(PREFERENCE_NAME_REQUEST_SETTINGS, []);
}

protected handleRequestSettingsChanges(newSettings: RequestSetting[]): void {
const models = this.preferenceService.get<string[]>(MODELS_PREF, []);
this.manager.createOrUpdateLanguageModels(...models.map(modelId => this.createAnthropicModelDescription(modelId, newSettings)));
}

protected createAnthropicModelDescription(modelId: string, requestSettings: RequestSetting[]): AnthropicModelDescription {
protected createAnthropicModelDescription(modelId: string): AnthropicModelDescription {
const id = `${ANTHROPIC_PROVIDER_ID}/${modelId}`;
const modelRequestSetting = this.getMatchingRequestSetting(modelId, ANTHROPIC_PROVIDER_ID, requestSettings);
const maxTokens = DEFAULT_MODEL_MAX_TOKENS[modelId];

const description: AnthropicModelDescription = {
id: id,
model: modelId,
apiKey: true,
enableStreaming: true,
defaultRequestSettings: modelRequestSetting?.requestSettings
enableStreaming: true
};

if (maxTokens !== undefined) {
@@ -104,20 +88,4 @@ export class AnthropicFrontendApplicationContribution implements FrontendApplica

return description;
}

protected getMatchingRequestSetting(
modelId: string,
providerId: string,
requestSettings: RequestSetting[]
): RequestSetting | undefined {
const matchingSettings = requestSettings.filter(
setting => (!setting.providerId || setting.providerId === providerId) && setting.modelId === modelId
);
if (matchingSettings.length > 1) {
console.warn(
`Multiple entries found for provider "${providerId}" and model "${modelId}". Using the first match.`
);
}
return matchingSettings[0];
}
}
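Note: with `defaultRequestSettings` and the `getMatchingRequestSetting` lookup removed, per-model defaults no longer flow through the model description; whatever arrives on `request.settings` wins (see `getSettings` in the node-side model below, which now falls back to `{}`). A hedged sketch of supplying settings per request; the `settings` key is taken from this diff, while the surrounding request literal and the example keys are assumptions:

```ts
import { LanguageModelRequest } from '@theia/ai-core';

// Sketch: settings travel with the individual request and are spread into the
// Anthropic API params (`...settings` below), so they can override defaults
// such as max_tokens. The keys shown are hypothetical Anthropic parameters.
const request = {
    settings: {
        temperature: 0.2,
        max_tokens: 2048
    }
} as Partial<LanguageModelRequest>;
```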
@@ -36,10 +36,7 @@ export interface AnthropicModelDescription {
* Maximum number of tokens to generate. Default is 4096.
*/
maxTokens?: number;
/**
* Default request settings for the Anthropic model.
*/
defaultRequestSettings?: { [key: string]: unknown };

}
export interface AnthropicLanguageModelsManager {
apiKey: string | undefined;
64 changes: 42 additions & 22 deletions packages/ai-anthropic/src/node/anthropic-language-model.ts
@@ -17,15 +17,15 @@
import {
LanguageModel,
LanguageModelRequest,
LanguageModelRequestMessage,
LanguageModelMessage,
LanguageModelResponse,
LanguageModelStreamResponse,
LanguageModelStreamResponsePart,
LanguageModelTextResponse
} from '@theia/ai-core';
import { CancellationToken, isArray } from '@theia/core';
import { Anthropic } from '@anthropic-ai/sdk';
import { MessageParam } from '@anthropic-ai/sdk/resources';
import { Message, MessageParam } from '@anthropic-ai/sdk/resources';

export const DEFAULT_MAX_TOKENS = 4096;
const EMPTY_INPUT_SCHEMA = {
@@ -41,23 +41,36 @@ interface ToolCallback {
args: string;
}

const createMessageContent = (message: LanguageModelMessage): MessageParam['content'] => {
if (LanguageModelMessage.isTextMessage(message)) {
return message.text;
} else if (LanguageModelMessage.isThinkingMessage(message)) {
return [{ signature: message.signature, thinking: message.thinking, type: 'thinking' }];
} else if (LanguageModelMessage.isToolUseMessage(message)) {
return [{ id: message.id, input: message.input, name: message.name, type: 'tool_use' }];
} else if (LanguageModelMessage.isToolResultMessage(message)) {
return [{ type: 'tool_result', tool_use_id: message.tool_use_id }];
}
throw new Error(`Unknown message type:'${JSON.stringify(message)}'`);
};

/**
* Transforms Theia language model messages to Anthropic API format
* @param messages Array of LanguageModelRequestMessage to transform
* @returns Object containing transformed messages and optional system message
*/
function transformToAnthropicParams(
messages: readonly LanguageModelRequestMessage[]
messages: readonly LanguageModelMessage[]
): { messages: MessageParam[]; systemMessage?: string } {
// Extract the system message (if any), as it is a separate parameter in the Anthropic API.
const systemMessageObj = messages.find(message => message.actor === 'system');
const systemMessage = systemMessageObj?.query;
const systemMessage = systemMessageObj && LanguageModelMessage.isTextMessage(systemMessageObj) && systemMessageObj.text || undefined;

const convertedMessages = messages
.filter(message => message.actor !== 'system')
.map(message => ({
role: toAnthropicRole(message),
content: message.query || '',
content: createMessageContent(message)
}));

return {
@@ -73,7 +86,7 @@ export const AnthropicModelIdentifier = Symbol('AnthropicModelIdentifier');
* @param message The message to convert
* @returns Anthropic role ('user' or 'assistant')
*/
function toAnthropicRole(message: LanguageModelRequestMessage): 'user' | 'assistant' {
function toAnthropicRole(message: LanguageModelMessage): 'user' | 'assistant' {
switch (message.actor) {
case 'ai':
return 'assistant';
@@ -92,12 +105,11 @@ export class AnthropicModel implements LanguageModel {
public model: string,
public enableStreaming: boolean,
public apiKey: () => string | undefined,
public defaultRequestSettings?: Readonly<Record<string, unknown>>,
public maxTokens: number = DEFAULT_MAX_TOKENS
) { }

protected getSettings(request: LanguageModelRequest): Readonly<Record<string, unknown>> {
return request.settings ?? this.defaultRequestSettings ?? {};
return request.settings ?? {};
}

async request(request: LanguageModelRequest, cancellationToken?: CancellationToken): Promise<LanguageModelResponse> {
@@ -148,11 +160,11 @@ export class AnthropicModel implements LanguageModel {
max_tokens: this.maxTokens,
messages: [...messages, ...(toolMessages ?? [])],
tools,
tool_choice: { type: 'auto' },
model: this.model,
...(systemMessage && { system: systemMessage }),
...settings
};

const stream = anthropic.messages.stream(params);

cancellationToken?.onCancellationRequested(() => {
@@ -165,11 +177,15 @@

const toolCalls: ToolCallback[] = [];
let toolCall: ToolCallback | undefined;
const currentMessages: Message[] = [];

for await (const event of stream) {
if (event.type === 'content_block_start') {
const contentBlock = event.content_block;

if (contentBlock.type === 'thinking') {
yield { thought: contentBlock.thinking, signature: contentBlock.signature ?? '' };
}
if (contentBlock.type === 'text') {
yield { content: contentBlock.text };
}
@@ -179,7 +195,12 @@
}
} else if (event.type === 'content_block_delta') {
const delta = event.delta;

if (delta.type === 'thinking_delta') {
yield { thought: delta.thinking, signature: '' };
}
if (delta.type === 'signature_delta') {
yield { thought: '', signature: delta.signature };
}
if (delta.type === 'text_delta') {
yield { content: delta.text };
}
@@ -199,6 +220,8 @@
}
throw new Error(`The response was stopped because it exceeded the max token limit of ${event.usage.output_tokens}.`);
}
} else if (event.type === 'message_start') {
currentMessages.push(event.message);
}
}
if (toolCalls.length > 0) {
@@ -216,17 +239,6 @@
});
yield { tool_calls: calls };

const toolRequestMessage: Anthropic.Messages.MessageParam = {
role: 'assistant',
content: toolResult.map(call => ({

type: 'tool_use',
id: call.id,
name: call.name,
input: JSON.parse(call.arguments)
}))
};

const toolResponseMessage: Anthropic.Messages.MessageParam = {
role: 'user',
content: toolResult.map(call => ({
Expand All @@ -235,7 +247,15 @@ export class AnthropicModel implements LanguageModel {
content: that.formatToolCallResult(call.result)
}))
};
const result = await that.handleStreamingRequest(anthropic, request, cancellationToken, [...(toolMessages ?? []), toolRequestMessage, toolResponseMessage]);
const result = await that.handleStreamingRequest(
anthropic,
request,
cancellationToken,
[
...(toolMessages ?? []),
...currentMessages.map(m => ({ role: m.role, content: m.content })),
toolResponseMessage
]);
for await (const nestedEvent of result.stream) {
yield nestedEvent;
}
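Note: the streaming handler above now surfaces Claude's extended-thinking output as `thought`/`signature` stream parts and, on tool calls, replays the assistant's accumulated messages ahead of the `tool_result` message. A sketch of enabling thinking and consuming those parts; the `thinking` parameter is the Anthropic SDK's extended-thinking option (reachable here because settings are spread into the API params), the budget value is illustrative, and the `messages` field name on the request is an assumption:

```ts
import { LanguageModelMessage } from '@theia/ai-core';
import { AnthropicModel } from './anthropic-language-model';

// Sketch: `model` stands for an AnthropicModel instance obtained elsewhere.
declare const model: AnthropicModel;

const messages: LanguageModelMessage[] = [
    { actor: 'user', type: 'text', text: 'Why is the sky blue?' } // field names as used in this PR
];

// Enable extended thinking through per-request settings; `budget_tokens` is illustrative.
const response = await model.request({
    messages, // field name on LanguageModelRequest is an assumption
    settings: { thinking: { type: 'enabled', budget_tokens: 2048 } }
});

if ('stream' in response) {
    for await (const part of response.stream) {
        if ('thought' in part && part.thought) {
            console.debug('[thinking]', part.thought); // reasoning deltas
        } else if ('content' in part && part.content) {
            process.stdout.write(part.content); // answer deltas
        }
    }
}
```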
@@ -52,7 +52,6 @@ export class AnthropicLanguageModelsManagerImpl implements AnthropicLanguageMode
model.model = modelDescription.model;
model.enableStreaming = modelDescription.enableStreaming;
model.apiKey = apiKeyProvider;
model.defaultRequestSettings = modelDescription.defaultRequestSettings;
if (modelDescription.maxTokens !== undefined) {
model.maxTokens = modelDescription.maxTokens;
} else {
@@ -65,7 +64,6 @@
modelDescription.model,
modelDescription.enableStreaming,
apiKeyProvider,
modelDescription.defaultRequestSettings,
modelDescription.maxTokens
)
]);
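Note: after this change the manager no longer copies request-setting defaults onto models; a description carries only identity, API-key flag, streaming flag, and token limit. A sketch of registering a model using exactly the fields left on `AnthropicModelDescription` by this PR; the model id is a hypothetical example:

```ts
import { AnthropicLanguageModelsManager, AnthropicModelDescription } from '../common';

// Sketch: `manager` stands for an injected AnthropicLanguageModelsManager.
declare const manager: AnthropicLanguageModelsManager;

const description: AnthropicModelDescription = {
    id: 'anthropic/claude-3-7-sonnet-latest', // hypothetical model id
    model: 'claude-3-7-sonnet-latest',
    apiKey: true,
    enableStreaming: true,
    maxTokens: 4096 // falls back to DEFAULT_MAX_TOKENS when omitted
};
manager.createOrUpdateLanguageModels(description);
```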