gitdataai/libs/agent/chat/mod.rs
ZhenYi 3de4fff11d
feat(service): improve model sync and harden git HTTP/SSH stability
Model sync:
- Filter OpenRouter models by what the user's AI client can actually access,
  before upserting metadata (avoids bloating with inaccessible models).
- Fall back to direct endpoint sync when no OpenRouter metadata matches
  (handles Bailian/MiniMax and other non-OpenRouter providers).

Git stability fixes:
- SSH: add 5s timeout on stdin flush/shutdown in channel_eof and
  cleanup_channel to prevent blocking the event loop on unresponsive git.
- SSH: remove dbg!() calls from production code paths.
- HTTP auth: pass proper Logger to SshAuthService instead of discarding
  all auth events to slog::Discard.

Dependencies:
- reqwest: add native-tls feature for HTTPS on Windows/Linux/macOS.
2026-04-17 00:13:40 +08:00


use std::collections::HashMap;
use std::pin::Pin;

use async_openai::types::chat::ChatCompletionTool;
use db::cache::AppCache;
use db::database::AppDatabase;
use models::agents::model;
use models::projects::project;
use models::repos::repo;
use models::rooms::{room, room_message};
use models::users::user;
use uuid::Uuid;

/// Maximum recursion rounds for tool-call loops (AI → tool → result → AI).
pub const DEFAULT_MAX_TOOL_DEPTH: usize = 3;
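A minimal sketch of the loop this constant bounds. The `ModelReply`, `run_tool_loop`, and closure shapes here are illustrative, not the crate's real API: each round sends the transcript to the model; a tool call is executed and its result appended, and the loop gives up once the depth is exhausted so a misbehaving model cannot recurse forever.

```rust
// Mirrors the module's constant.
const DEFAULT_MAX_TOOL_DEPTH: usize = 3;

// Hypothetical model reply: either final text or a tool to invoke.
enum ModelReply {
    Text(String),
    ToolCall(String),
}

// Run the AI → tool → result → AI loop, bounded by the depth constant.
fn run_tool_loop(
    mut call_model: impl FnMut(&[String]) -> ModelReply,
    mut run_tool: impl FnMut(&str) -> String,
) -> Option<String> {
    let mut transcript: Vec<String> = Vec::new();
    for _round in 0..DEFAULT_MAX_TOOL_DEPTH {
        match call_model(&transcript) {
            ModelReply::Text(answer) => return Some(answer),
            ModelReply::ToolCall(name) => {
                // Feed the tool result back into the conversation.
                transcript.push(run_tool(&name));
            }
        }
    }
    // Depth exhausted without a final answer.
    None
}
```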
/// A single chunk from an AI streaming response.
#[derive(Debug, Clone)]
pub struct AiStreamChunk {
    /// Token text carried by this chunk.
    pub content: String,
    /// Set on the final chunk of the stream.
    pub done: bool,
}

/// Optional streaming callback: called for each token chunk.
pub type StreamCallback = Box<
    dyn Fn(AiStreamChunk) -> Pin<Box<dyn std::future::Future<Output = ()> + Send>> + Send + Sync,
>;

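Boxing an async closure into this alias takes two layers: `Box::new` for the `Fn`, and `Box::pin` for the future each call returns. A self-contained sketch, using simplified stand-ins for the module's types (a real callback might forward chunks over a websocket instead of collecting them):

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::{Arc, Mutex};

// Simplified mirrors of the module's types, for illustration only.
#[derive(Debug, Clone)]
struct AiStreamChunk {
    content: String,
    done: bool,
}

type StreamCallback = Box<
    dyn Fn(AiStreamChunk) -> Pin<Box<dyn Future<Output = ()> + Send>> + Send + Sync,
>;

// Build a callback that appends each token chunk to a shared buffer.
fn make_collect_callback(sink: Arc<Mutex<Vec<String>>>) -> StreamCallback {
    Box::new(move |chunk: AiStreamChunk| {
        let sink = sink.clone();
        // Box::pin turns the async block into the pinned, boxed future
        // the alias requires.
        Box::pin(async move {
            if !chunk.done {
                sink.lock().unwrap().push(chunk.content);
            }
        })
    })
}
```

The `Arc` clone inside the closure is what lets the returned `'static` future outlive the call itself.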
/// Everything the chat service needs to run one AI request in a room.
pub struct AiRequest {
    pub db: AppDatabase,
    pub cache: AppCache,
    /// Model configuration to use for this request.
    pub model: model::Model,
    pub project: project::Model,
    pub sender: user::Model,
    pub room: room::Model,
    /// Raw user input text.
    pub input: String,
    /// Users and repos @-mentioned in the input.
    pub mention: Vec<Mention>,
    /// Prior room messages, used as conversation context.
    pub history: Vec<room_message::Model>,
    /// Display names for user IDs appearing in the history.
    pub user_names: HashMap<Uuid, String>,
    pub temperature: f64,
    pub max_tokens: i32,
    pub top_p: f64,
    pub frequency_penalty: f64,
    pub presence_penalty: f64,
    /// Whether to request the model's reasoning ("thinking") mode.
    pub think: bool,
    /// Tools the model may call, if any.
    pub tools: Option<Vec<ChatCompletionTool>>,
    /// Upper bound on tool-call rounds; see `DEFAULT_MAX_TOOL_DEPTH`.
    pub max_tool_depth: usize,
}

/// An entity @-mentioned in a chat message.
pub enum Mention {
    User(user::Model),
    Repo(repo::Model),
}

pub mod context;
pub mod service;
pub use context::{AiContextSenderType, RoomMessageContext};
pub use service::ChatService;