commit: no msg

This commit is contained in:
ZhenYi 2026-04-14 19:02:01 +08:00
commit 42f0a3b91b
1046 changed files with 209174 additions and 0 deletions


@@ -0,0 +1,42 @@
You are a Senior Code Reviewer with expertise in software architecture, design patterns, and best practices. Your role is to review completed project steps against original plans and ensure code quality standards are met.
When reviewing completed work, you will:
1. **Plan Alignment Analysis**:
- Compare the implementation against the original planning document or step description
- Identify any deviations from the planned approach, architecture, or requirements
- Assess whether deviations are justified improvements or problematic departures
- Verify that all planned functionality has been implemented
2. **Code Quality Assessment**:
- Review code for adherence to established patterns and conventions
- Check for proper error handling, type safety, and defensive programming
- Evaluate code organization, naming conventions, and maintainability
- Assess test coverage and quality of test implementations
- Look for potential security vulnerabilities or performance issues
3. **Architecture and Design Review**:
- Ensure the implementation follows SOLID principles and established architectural patterns
- Check for proper separation of concerns and loose coupling
- Verify that the code integrates well with existing systems
- Assess scalability and extensibility considerations
4. **Documentation and Standards**:
- Verify that code includes appropriate comments and documentation
- Check that file headers, function documentation, and inline comments are present and accurate
- Ensure adherence to project-specific coding standards and conventions
5. **Issue Identification and Recommendations**:
- Clearly categorize issues as: Critical (must fix), Important (should fix), or Suggestions (nice to have)
- For each issue, provide specific examples and actionable recommendations
- When you identify plan deviations, explain whether they're problematic or beneficial
- Suggest specific improvements with code examples when helpful
6. **Communication Protocol**:
- If you find significant deviations from the plan, ask the coding agent to review and confirm the changes
- If you identify issues with the original plan itself, recommend plan updates
- For implementation problems, provide clear guidance on fixes needed
- Always acknowledge what was done well before highlighting issues
Your output should be structured, actionable, and focused on helping maintain high code quality while ensuring project goals are met. Be thorough but concise, and always provide constructive feedback that helps improve both the current implementation and future development practices.

4
.claude/work.yaml Normal file

@@ -0,0 +1,4 @@
list:
- "Refactor libs/api/error.rs into standard errors (Branch)"
- "2d76bf69-9ca9-4da9-b693-31005c5c4c0d"
- "c3f51440-f6ee-483a-8887-556e94f0897f"

9
.dockerignore Normal file

@@ -0,0 +1,9 @@
target/
.git/
.idea/
.vscode/
node_modules/
*.log
.env
.env.local
.env.*.local

109
.env.example Normal file

@@ -0,0 +1,109 @@
# =============================================================================
# Required - must be set for the application to start
# =============================================================================
# Database connection
APP_DATABASE_URL=postgresql://user:password@localhost:5432/dbname
APP_DATABASE_SCHEMA_SEARCH_PATH=public
# Redis (multiple nodes supported, comma-separated)
APP_REDIS_URL=redis://localhost:6379
# APP_REDIS_URLS=redis://localhost:6379,redis://localhost:6378
# AI service
APP_AI_BASIC_URL=https://api.openai.com/v1
APP_AI_API_KEY=sk-xxxxx
# Embedding + vector search
APP_EMBED_MODEL_BASE_URL=https://api.openai.com/v1
APP_EMBED_MODEL_API_KEY=sk-xxxxx
APP_EMBED_MODEL_NAME=text-embedding-3-small
APP_EMBED_MODEL_DIMENSIONS=1536
APP_QDRANT_URL=http://localhost:6333
# APP_QDRANT_API_KEY=
# SMTP email
APP_SMTP_HOST=smtp.example.com
APP_SMTP_PORT=587
APP_SMTP_USERNAME=noreply@example.com
APP_SMTP_PASSWORD=xxxxx
APP_SMTP_FROM=noreply@example.com
APP_SMTP_TLS=true
APP_SMTP_TIMEOUT=30
# File storage
APP_AVATAR_PATH=/data/avatars
# Git repository storage root
APP_REPOS_ROOT=/data/repos
# =============================================================================
# Domain / URL (optional, defaults provided)
# =============================================================================
APP_DOMAIN_URL=http://127.0.0.1
# APP_STATIC_DOMAIN=
# APP_MEDIA_DOMAIN=
# APP_GIT_HTTP_DOMAIN=
# =============================================================================
# Database pool (optional, defaults provided)
# =============================================================================
# APP_DATABASE_MAX_CONNECTIONS=10
# APP_DATABASE_MIN_CONNECTIONS=2
# APP_DATABASE_IDLE_TIMEOUT=60000
# APP_DATABASE_MAX_LIFETIME=300000
# APP_DATABASE_CONNECTION_TIMEOUT=5000
# APP_DATABASE_REPLICAS=
# APP_DATABASE_HEALTH_CHECK_INTERVAL=30
# APP_DATABASE_RETRY_ATTEMPTS=3
# APP_DATABASE_RETRY_DELAY=5
# =============================================================================
# Redis pool (optional, defaults provided)
# =============================================================================
# APP_REDIS_POOL_SIZE=10
# APP_REDIS_CONNECT_TIMEOUT=5
# APP_REDIS_ACQUIRE_TIMEOUT=5
# =============================================================================
# SSH (optional, defaults provided)
# =============================================================================
# APP_SSH_DOMAIN=
# APP_SSH_PORT=22
# APP_SSH_SERVER_PRIVATE_KEY=
# APP_SSH_SERVER_PUBLIC_KEY=
# =============================================================================
# Logging (optional, defaults provided)
# =============================================================================
# APP_LOG_LEVEL=info
# APP_LOG_FORMAT=json
# APP_LOG_FILE_ENABLED=false
# APP_LOG_FILE_PATH=./logs
# APP_LOG_FILE_ROTATION=daily
# APP_LOG_FILE_MAX_FILES=7
# APP_LOG_FILE_MAX_SIZE=104857600
# OpenTelemetry (optional, disabled by default)
# APP_OTEL_ENABLED=false
# APP_OTEL_ENDPOINT=http://localhost:5080/api/default/v1/traces
# APP_OTEL_SERVICE_NAME=
# APP_OTEL_SERVICE_VERSION=
# APP_OTEL_AUTHORIZATION=
# APP_OTEL_ORGANIZATION=
# =============================================================================
# NATS / Hook pool (optional, defaults provided)
# =============================================================================
# HOOK_POOL_MAX_CONCURRENT=(number of CPU cores)
# HOOK_POOL_CPU_THRESHOLD=80.0
# HOOK_POOL_REDIS_LIST_PREFIX={hook}
# HOOK_POOL_REDIS_LOG_CHANNEL=hook:logs
# HOOK_POOL_REDIS_BLOCK_TIMEOUT=5
# HOOK_POOL_REDIS_MAX_RETRIES=3
# HOOK_POOL_WORKER_ID=(random UUID)
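The variables above follow a simple convention: required `APP_*` settings have no fallback, while optional ones fall back to documented defaults. A minimal sketch of reading them in Rust (the `app_var` helper is hypothetical, not the project's actual config crate):

```rust
use std::env;

// Hypothetical helper: read an APP_-prefixed variable with a fallback,
// mirroring the Required/optional split documented above.
fn app_var(name: &str, default: &str) -> String {
    env::var(name).unwrap_or_else(|_| default.to_string())
}

fn main() {
    // Required setting: in the real app this would be a startup error
    // if missing; here we just fall back to the example value.
    let db_url = env::var("APP_DATABASE_URL")
        .unwrap_or_else(|_| "postgresql://user:password@localhost:5432/dbname".to_string());
    // Optional setting with its documented default.
    let max_conns = app_var("APP_DATABASE_MAX_CONNECTIONS", "10");
    println!("db={db_url} pool={max_conns}");
}
```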

15
.gitignore vendored Normal file

@@ -0,0 +1,15 @@
/target
node_modules
.claude
.zed
.vscode
.idea
.env
.env.local
dist
.codex
.qwen
.opencode
.omc
AGENT.md
ARCHITECTURE.md

10
.idea/.gitignore generated vendored Normal file

@@ -0,0 +1,10 @@
# Default ignored files
/shelf/
/workspace.xml
# Default folder containing query files, already ignored
/queries/
# Datasource local storage ignored files
/dataSources/
/dataSources.local.xml
# Editor-based HTTP Client requests
/httpRequests/

24
.idea/code.iml generated Normal file

@@ -0,0 +1,24 @@
<?xml version="1.0" encoding="UTF-8"?>
<module type="EMPTY_MODULE" version="4">
<component name="NewModuleRootManager">
<content url="file://$MODULE_DIR$">
<sourceFolder url="file://$MODULE_DIR$/libs/models/src" isTestSource="false" />
<sourceFolder url="file://$MODULE_DIR$/src" isTestSource="false" />
<sourceFolder url="file://$MODULE_DIR$/libs/cfg/src" isTestSource="false" />
<sourceFolder url="file://$MODULE_DIR$/libs/service/src" isTestSource="false" />
<sourceFolder url="file://$MODULE_DIR$/libs/migrate/src" isTestSource="false" />
<sourceFolder url="file://$MODULE_DIR$/libs/room/src" isTestSource="false" />
<sourceFolder url="file://$MODULE_DIR$/apps/migrate/src" isTestSource="false" />
<sourceFolder url="file://$MODULE_DIR$/apps/app/src" isTestSource="false" />
<sourceFolder url="file://$MODULE_DIR$/apps/git-hook/src" isTestSource="false" />
<sourceFolder url="file://$MODULE_DIR$/apps/bin/src" isTestSource="false" />
<sourceFolder url="file://$MODULE_DIR$/apps/email/src" isTestSource="false" />
<sourceFolder url="file://$MODULE_DIR$/apps/gitserver/src" isTestSource="false" />
<sourceFolder url="file://$MODULE_DIR$/apps/operator/src" isTestSource="false" />
<sourceFolder url="file://$MODULE_DIR$/libs/agent-tool-derive/src" isTestSource="false" />
<excludeFolder url="file://$MODULE_DIR$/target" />
</content>
<orderEntry type="inheritedJdk" />
<orderEntry type="sourceFolder" forTests="false" />
</component>
</module>

8
.idea/modules.xml generated Normal file

@@ -0,0 +1,8 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
<component name="ProjectModuleManager">
<modules>
<module fileurl="file://$PROJECT_DIR$/.idea/code.iml" filepath="$PROJECT_DIR$/.idea/code.iml" />
</modules>
</component>
</project>

6
.idea/vcs.xml generated Normal file

@@ -0,0 +1,6 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
<component name="VcsDirectoryMappings">
<mapping directory="" vcs="Git" />
</component>
</project>

182
AGENT.md Normal file

@@ -0,0 +1,182 @@
You are a deterministic autonomous coding agent.
Your purpose is NOT to be fast or clever.
Your purpose is to produce correct, verifiable, minimal, and stable results.
You MUST operate under strict discipline.
---
## CORE EXECUTION MODEL
You MUST follow this exact loop:
1. UNDERSTAND
2. PLAN
3. EXECUTE (single step only)
4. VERIFY (mandatory)
5. REVIEW (mandatory)
6. FIX or CONTINUE
You are NOT allowed to skip any step.
---
## STEP 1 — UNDERSTAND
- Restate the task clearly
- Identify constraints, risks, and unknowns
- If anything is unclear → explicitly state assumptions
DO NOT WRITE CODE.
---
## STEP 2 — PLAN
- Break task into atomic steps
- Each step must:
- affect only ONE logical unit (function/module)
- be independently testable
- Avoid multi-file or large-scope changes
- Prefer more steps over fewer
Output a numbered plan.
---
## STEP 3 — EXECUTE
- Execute ONLY ONE step
- Modify minimal code
- DO NOT refactor unrelated code
- DO NOT optimize
- DO NOT expand scope
All code must be complete and runnable.
---
## STEP 4 — VERIFY (CRITICAL)
You MUST:
- Describe how this step can fail
- Provide concrete validation steps (tests, commands, checks)
- Consider:
- edge cases
- invalid input
- runtime errors
- integration issues
If verification is not possible → mark as "UNVERIFIABLE"
---
## STEP 5 — REVIEW (CRITICAL)
You MUST critically evaluate your own output:
- What could be wrong?
- What assumptions may break?
- Did you overreach scope?
- Is there a simpler or safer solution?
Be skeptical. Assume you are wrong.
---
## STEP 6 — FIX OR CONTINUE
IF issues found:
- Fix them immediately
- DO NOT proceed to next step
IF no issues:
- Move to next step
---
## HARD CONSTRAINTS
- NEVER implement the whole solution at once
- NEVER skip verification
- NEVER assume correctness
- ALWAYS minimize change scope
- ALWAYS prefer boring, simple solutions
- NEVER hallucinate APIs or functions
- IF uncertain → explicitly say "UNCERTAIN"
---
## FAILURE HANDLING
If you fail twice:
- STOP
- Re-evaluate the entire plan
- Propose a different approach
---
## OUTPUT FORMAT (STRICT)
## Step X: <title>
### Understand
...
### Plan
...
### Execute
...
### Verify
...
### Review
...
---
## ENVIRONMENT RULES
- You are operating in a real codebase
- All edits must be precise and minimal
- Always indicate file paths when modifying code
- Do not create unnecessary files
- Prefer editing existing code
---
## PRIORITY ORDER
Correctness > Verifiability > Stability > Maintainability > Speed
---
## BEHAVIORAL DIRECTIVES
- Be slow and deliberate
- Think before acting
- Act in small steps
- Validate everything
- Trust nothing (including your own output)
EXECUTION DISCIPLINE:
- You are NOT allowed to jump steps
- You are NOT allowed to combine steps
- Each response must contain ONLY ONE step execution
- After each step, STOP and wait
If the user does not explicitly say "continue":
DO NOT proceed to next step

8862
Cargo.lock generated Normal file

File diff suppressed because it is too large

185
Cargo.toml Normal file

@@ -0,0 +1,185 @@
[workspace]
members = [
"libs/models",
"libs/session",
"libs/git",
"libs/email",
"libs/queue",
"libs/room",
"libs/config",
"libs/service",
"libs/db",
"libs/api",
"libs/webhook",
"libs/transport",
"libs/rpc",
"libs/avatar",
"libs/agent",
"libs/migrate",
"libs/agent-tool-derive",
"apps/migrate",
"apps/app",
"apps/git-hook",
"apps/gitserver",
"apps/email",
"apps/operator",
]
resolver = "3"
[workspace.dependencies]
models = { path = "libs/models" }
session = { path = "libs/session" }
git = { path = "libs/git" }
email = { path = "libs/email" }
queue = { path = "libs/queue" }
room = { path = "libs/room" }
config = { path = "libs/config" }
service = { path = "libs/service" }
db = { path = "libs/db" }
api = { path = "libs/api" }
agent = { path = "libs/agent" }
webhook = { path = "libs/webhook" }
rpc = { path = "libs/rpc" }
avatar = { path = "libs/avatar" }
migrate = { path = "libs/migrate" }
sea-query = "1.0.0-rc.31"
actix-web = "4.13.0"
actix-files = "0.6.10"
actix-cors = "0.7.1"
actix-session = "0.11.0"
actix-ws = "0.4.0"
actix-multipart = "0.7.2"
actix-analytics = "1.2.1"
actix-jwt-session = "1.0.7"
actix-csrf = "0.8.0"
actix-rt = "2.11.0"
actix = "0.13"
async-stream = "0.3"
async-nats = "0.47.0"
actix-service = "2.0.3"
actix-utils = "3.0.1"
redis = "1.1.0"
anyhow = "1.0.102"
derive_more = "2.1.1"
blake3 = "1.8.3"
argon2 = "0.5.3"
thiserror = "2.0.18"
password-hash = "0.6.0"
awc = "3.8.2"
bstr = "1.12.1"
captcha-rs = "0.5.0"
deadpool-redis = "0.23.0"
deadpool = "0.13.0"
dotenv = "0.15.0"
env_logger = "0.11.10"
flate2 = "1.1.9"
git2 = "0.20.4"
slog = "2.8.2"
git2-ext = "1.0.0"
git2-hooks = "0.7.0"
futures = "0.3.32"
futures-util = "0.3.32"
globset = "0.4.18"
hex = "0.4.3"
lettre = { version = "0.11.19", default-features = false, features = ["tokio1-rustls-tls", "smtp-transport", "builder", "pool"] }
kube = { version = "0.98", features = ["derive", "runtime"] }
k8s-openapi = { version = "0.24", default-features = false, features = ["v1_28", "schemars"] }
mime = "0.3.17"
mime_guess2 = "2.3.1"
opentelemetry = "0.31.0"
opentelemetry-otlp = "0.31.0"
opentelemetry_sdk = "0.31.0"
opentelemetry-http = "0.31.0"
prost = "0.14.3"
prost-build = "0.14.3"
qdrant-client = "1.17.0"
rand = "0.10.0"
russh = { version = "0.55.0", default-features = false }
hmac = { version = "0.12.1", features = ["std"] }
sha1_smol = "1.0.1"
rsa = { version = "0.9.7", package = "rsa" }
reqwest = { version = "0.13.2", default-features = false }
dotenvy = "0.15.7"
aws-sdk-s3 = "1.127.0"
sea-orm = "2.0.0-rc.37"
sea-orm-migration = "2.0.0-rc.37"
sha1 = { version = "0.10.6", features = ["compress"] }
sha2 = "0.11.0-rc.5"
sysinfo = "0.38.4"
ssh-key = "0.7.0-rc.9"
tar = "0.4.45"
zip = "8.3.1"
tokenizer = "0.1.2"
tiktoken-rs = "0.9.1"
regex = "1.12.3"
jsonwebtoken = "10.3.0"
once_cell = "1.21.4"
async-trait = "0.1.89"
fs2 = "0.4.3"
image = "0.25.10"
tokio = "1.50.0"
tokio-util = "0.7.18"
tokio-stream = "0.1.18"
url = "2.5.8"
num_cpus = "1.17.0"
clap = "4.6.0"
time = "0.3.47"
chrono = "0.4.44"
tracing = "0.1.44"
tracing-subscriber = "0.3.23"
tracing-opentelemetry = "0.32.1"
tonic = "0.14.5"
tonic-build = "0.14.5"
uuid = "1.22.0"
async-openai = { version = "0.34.0", features = ["embedding", "chat-completion"] }
hostname = "0.4"
utoipa = { version = "5.4.0", features = ["chrono", "uuid"] }
rust_decimal = "1.40.0"
walkdir = "2.5.0"
moka = "0.12.15"
serde = "1.0.228"
serde_json = "1.0.149"
serde_yaml = "0.9.33"
serde_bytes = "0.11.19"
base64 = "0.22.1"
[workspace.package]
version = "0.2.9"
edition = "2024"
authors = []
description = ""
repository = ""
readme = ""
homepage = ""
license = ""
keywords = []
categories = []
documentation = ""
[workspace.lints.rust]
unsafe_code = "warn"
[workspace.lints.clippy]
unwrap_used = "warn"
expect_used = "warn"
[profile.dev]
debug = 1
incremental = true
codegen-units = 256
[profile.release]
lto = "thin"
codegen-units = 1
strip = true
opt-level = 3
[profile.dev.package.num-bigint-dig]
opt-level = 3

263
README.md Normal file

@@ -0,0 +1,263 @@
# Code API
> A modern code collaboration and team communication platform that combines GitHub-style code management with Slack-style real-time messaging.
## Overview
Code API is a full-stack monorepo built on a Rust backend and a React frontend. It provides GitHub-like Issue tracking, Pull Request code review, and Git repository management, plus Slack-like real-time chat Rooms.
### Core Features
- **Repository management** — Git repository browsing, branch management, file operations
- **Issue tracking** — create, assign, label, and comment on Issues
- **Pull Requests** — code review, inline comments, CI status checks
- **Real-time chat (Rooms)** — team channels, message replies, threaded discussions
- **Notifications** — email notifications, webhook integration
- **User system** — authentication, session management, access control
## Tech Stack
### Backend (Rust)
| Category | Technology |
|------|------|
| Language | Rust 2024 Edition |
| Web framework | Actix-web |
| ORM | SeaORM |
| Database | PostgreSQL |
| Cache | Redis |
| Real-time | WebSocket (actix-ws) |
| Message queue | NATS |
| Vector database | Qdrant |
| Git operations | git2 / git2-ext |
| Authentication | JWT + Session |
| API docs | utoipa (OpenAPI) |
### Frontend (TypeScript/React)
| Category | Technology |
|------|------|
| Language | TypeScript 5.9 |
| Framework | React 19 |
| Routing | React Router v7 |
| Build tool | Vite 8 + SWC |
| UI components | shadcn/ui + Tailwind CSS 4 |
| State management | TanStack Query |
| HTTP client | Axios + OpenAPI-generated client |
| Markdown | react-markdown + Shiki |
| Drag and drop | dnd-kit |
## Project Structure
```
code/
├── apps/            # Application entry points
│   ├── app/         # Main web application
│   ├── gitserver/   # Git HTTP/SSH server
│   ├── git-hook/    # Git hook processing service
│   ├── email/       # Email delivery service
│   ├── migrate/     # Database migration tool
│   └── operator/    # Kubernetes operator
├── libs/            # Shared libraries
│   ├── api/         # REST API routes and handlers
│   ├── models/      # Database models (SeaORM)
│   ├── service/     # Business logic layer
│   ├── db/          # Database connection pooling
│   ├── config/      # Configuration management
│   ├── session/     # Session management
│   ├── git/         # Git operation wrappers
│   ├── room/        # Real-time chat service
│   ├── queue/       # Message queue
│   ├── webhook/     # Webhook handling
│   ├── rpc/         # RPC services (gRPC/Tonic)
│   ├── email/       # Email delivery
│   ├── agent/       # AI agent integration
│   ├── avatar/      # Avatar processing
│   ├── transport/   # Transport layer
│   └── migrate/     # Migration scripts
├── src/             # Frontend source
│   ├── app/         # Page route components
│   ├── components/  # Reusable components
│   ├── contexts/    # React Context
│   ├── client/      # API client (OpenAPI-generated)
│   ├── hooks/       # Custom hooks
│   └── lib/         # Utility functions
├── docker/          # Docker configuration
├── scripts/         # Build scripts
├── openapi.json     # OpenAPI specification
└── Cargo.toml       # Rust workspace configuration
```
## Quick Start
### Requirements
- **Rust**: latest stable (Edition 2024)
- **Node.js**: >= 20
- **pnpm**: >= 10
- **PostgreSQL**: >= 14
- **Redis**: >= 6
### Setup
1. **Clone the repository**
```bash
git clone <repository-url>
cd code
```
2. **Configure environment variables**
```bash
cp .env.example .env
# Edit .env to set the database connection and other settings
```
3. **Start the database and Redis**
```bash
# Start with Docker (recommended)
docker compose -f docker/docker-compose.yml up -d
```
4. **Run database migrations**
```bash
cargo run -p migrate
```
5. **Start the backend**
```bash
cargo run -p app
```
6. **Start the frontend dev server**
```bash
pnpm install
pnpm dev
```
7. **Open the app**
- Frontend: http://localhost:5173
- Backend API: http://localhost:8080
## Development Guide
### Backend
```bash
# Run all tests
cargo test
# Run tests for a single crate
cargo test -p service
# Lint
cargo clippy --workspace
# Format
cargo fmt --workspace
# Generate the OpenAPI document
pnpm openapi:gen-json
```
### Frontend
```bash
# Install dependencies
pnpm install
# Start the dev server
pnpm dev
# Production build
pnpm build
# Lint
pnpm lint
# Generate the OpenAPI client
pnpm openapi:gen
```
### Database Migrations
```bash
# Create a new migration
cd libs/migrate && cargo run -- create <migration_name>
# Apply migrations
cargo run -p migrate
```
## Configuration
### Required
| Variable | Description | Example |
|--------|------|------|
| `APP_DATABASE_URL` | PostgreSQL connection | `postgresql://user:pass@localhost/db` |
| `APP_REDIS_URL` | Redis connection | `redis://localhost:6379` |
| `APP_AI_API_KEY` | AI service API key | `sk-xxxxx` |
| `APP_SMTP_*` | SMTP email settings | see `.env.example` |
### Optional
| Variable | Default | Description |
|--------|--------|------|
| `APP_DATABASE_MAX_CONNECTIONS` | 10 | Database connection pool size |
| `APP_LOG_LEVEL` | info | Log level |
| `APP_QDRANT_URL` | - | Vector database endpoint |
| `APP_REPOS_ROOT` | /data/repos | Git repository storage path |
See `.env.example` for the complete list.
## API Documentation
With the server running, visit http://localhost:8080/swagger-ui for the full API documentation.
## Architecture
### Backend Layering
```
┌─────────────────────────────────────┐
│ apps/app                            │ ← application entry
├─────────────────────────────────────┤
│ libs/api                            │ ← HTTP routes/handlers
├─────────────────────────────────────┤
│ libs/service                        │ ← business logic
├─────────────────────────────────────┤
│ libs/models │ libs/db │ libs/git    │ ← data access
├─────────────────────────────────────┤
│ PostgreSQL │ Redis │ Qdrant         │ ← storage
└─────────────────────────────────────┘
```
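The call flow implied by these layers can be sketched with plain functions. The names below are illustrative only, not the actual crate APIs:

```rust
// Illustrative layering sketch: a thin handler (libs/api) delegating to
// business logic (libs/service), which would in turn hit the data layer.
struct IssueService;

impl IssueService {
    // Business rules live in the service layer.
    fn create_issue(&self, title: &str) -> Result<String, String> {
        if title.trim().is_empty() {
            return Err("title must not be empty".to_string());
        }
        Ok(format!("issue created: {title}"))
    }
}

// In the real project this would be an actix-web route handler; it only
// deals with transport concerns and forwards to the service.
fn create_issue_handler(svc: &IssueService, title: &str) -> Result<String, String> {
    svc.create_issue(title)
}

fn main() {
    let svc = IssueService;
    println!("{:?}", create_issue_handler(&svc, "Fix login redirect"));
}
```

Keeping handlers thin this way means the business rules can be unit-tested without spinning up an HTTP server.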
### Frontend Layout
```
src/
├── app/            # Page-level components (organized by feature)
│   ├── project/    # Project pages (Issues, Settings)
│   ├── repository/ # Repository pages (PRs, code browsing)
│   └── settings/   # User settings
├── components/     # Reusable components
│   ├── ui/         # Base UI components (shadcn)
│   ├── project/    # Project components
│   ├── repository/ # Repository components
│   └── room/       # Chat components
├── contexts/       # React Context (user, chat rooms, etc.)
├── client/         # OpenAPI-generated client
└── lib/            # Utilities and hooks
```
## Task List
Current development tasks are tracked in [task.md](./task.md), grouped by priority:
- **P0** — blocking issues (core flows broken)
- **P1** — core experience (key features)
- **P2** — polish (enhancements)
## License
[To be added]

34
apps/app/Cargo.toml Normal file

@@ -0,0 +1,34 @@
[package]
name = "app"
version.workspace = true
edition.workspace = true
authors.workspace = true
description.workspace = true
repository.workspace = true
readme.workspace = true
homepage.workspace = true
license.workspace = true
keywords.workspace = true
categories.workspace = true
documentation.workspace = true
[dependencies]
tokio = { workspace = true, features = ["full"] }
uuid = { workspace = true }
service = { workspace = true }
api = { workspace = true }
session = { workspace = true }
config = { workspace = true }
db = { workspace = true }
migrate = { workspace = true }
actix-web = { workspace = true }
actix-cors = { workspace = true }
futures = { workspace = true }
slog = "2"
anyhow = { workspace = true }
clap = { workspace = true }
sea-orm = { workspace = true }
serde_json = { workspace = true }
chrono = { workspace = true }
[lints]
workspace = true

12
apps/app/src/args.rs Normal file

@@ -0,0 +1,12 @@
use clap::Parser;
#[derive(Parser, Debug)]
#[command(name = "app")]
#[command(version)]
pub struct ServerArgs {
#[arg(long, short)]
pub bind: Option<String>,
#[arg(long)]
pub workers: Option<usize>,
}

126
apps/app/src/logging.rs Normal file

@@ -0,0 +1,126 @@
//! Structured HTTP request logging middleware using slog.
//!
//! Logs every incoming request with method, path, status code,
//! response time, client IP, and authenticated user ID.
use actix_web::dev::{Service, ServiceRequest, ServiceResponse, Transform};
use futures::future::{LocalBoxFuture, Ready, ok};
use session::SessionExt;
use slog::{error as slog_error, info as slog_info, warn as slog_warn};
use std::sync::Arc;
use std::task::{Context, Poll};
use std::time::Instant;
use uuid::Uuid;
/// Default log format: `{method} {path} {status} {duration_ms}ms`
pub struct RequestLogger {
log: slog::Logger,
}
impl RequestLogger {
pub fn new(log: slog::Logger) -> Self {
Self { log }
}
}
impl<S, B> Transform<S, ServiceRequest> for RequestLogger
where
S: Service<ServiceRequest, Response = ServiceResponse<B>, Error = actix_web::Error> + 'static,
S::Future: 'static,
B: 'static,
{
type Response = ServiceResponse<B>;
type Error = actix_web::Error;
type Transform = RequestLoggerMiddleware<S>;
type InitError = ();
type Future = Ready<Result<Self::Transform, Self::InitError>>;
fn new_transform(&self, service: S) -> Self::Future {
ok(RequestLoggerMiddleware {
service: Arc::new(service),
log: self.log.clone(),
})
}
}
pub struct RequestLoggerMiddleware<S> {
service: Arc<S>,
log: slog::Logger,
}
impl<S> Clone for RequestLoggerMiddleware<S> {
fn clone(&self) -> Self {
Self {
service: self.service.clone(),
log: self.log.clone(),
}
}
}
impl<S, B> Service<ServiceRequest> for RequestLoggerMiddleware<S>
where
S: Service<ServiceRequest, Response = ServiceResponse<B>, Error = actix_web::Error> + 'static,
S::Future: 'static,
B: 'static,
{
type Response = ServiceResponse<B>;
type Error = actix_web::Error;
type Future = LocalBoxFuture<'static, Result<Self::Response, Self::Error>>;
fn poll_ready(&self, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
self.service.poll_ready(cx)
}
fn call(&self, req: ServiceRequest) -> Self::Future {
let started = Instant::now();
let log = self.log.clone();
let method = req.method().to_string();
let path = req.path().to_string();
let query = req.query_string().to_string();
let remote = req
.connection_info()
.realip_remote_addr()
.map(|s| s.to_string())
.unwrap_or_else(|| "unknown".to_string());
let user_id: Option<Uuid> = req.get_session().user();
let full_path = if query.is_empty() {
path.clone()
} else {
format!("{}?{}", path, query)
};
// Clone the Arc<S> so it can be moved into the async block
let service = self.service.clone();
Box::pin(async move {
let res = service.call(req).await?;
let elapsed = started.elapsed();
let status = res.status();
let status_code = status.as_u16();
let is_health = path == "/health";
if !is_health {
let user_id_str = user_id
.map(|u: Uuid| u.to_string())
.unwrap_or_else(|| "-".to_string());
let log_message = format!(
"HTTP request | method={} | path={} | status={} | duration_ms={} | remote={} | user_id={}",
method,
full_path,
status_code,
elapsed.as_millis(),
remote,
user_id_str
);
match status_code {
200..=299 => slog_info!(&log, "{}", log_message),
400..=499 => slog_warn!(&log, "{}", log_message),
_ => slog_error!(&log, "{}", log_message),
}
}
Ok(res)
})
}
}

210
apps/app/src/main.rs Normal file

@@ -0,0 +1,210 @@
use actix_cors::Cors;
use actix_web::cookie::time::Duration;
use actix_web::middleware::Logger;
use actix_web::{App, HttpResponse, HttpServer, cookie::Key, web};
use clap::Parser;
use db::cache::AppCache;
use db::database::AppDatabase;
use sea_orm::ConnectionTrait;
use service::AppService;
use session::SessionMiddleware;
use session::config::{PersistentSession, SessionLifecycle, TtlExtensionPolicy};
use session::storage::RedisClusterSessionStore;
use slog::Drain;
mod args;
mod logging;
use args::ServerArgs;
use config::AppConfig;
use migrate::{Migrator, MigratorTrait};
#[derive(Clone)]
pub struct AppState {
pub db: AppDatabase,
pub cache: AppCache,
}
fn build_slog_logger(level: &str) -> slog::Logger {
let level_filter = match level {
"trace" => 0usize,
"debug" => 1usize,
"info" => 2usize,
"warn" => 3usize,
"error" => 4usize,
_ => 2usize,
};
struct StderrDrain(usize);
impl Drain for StderrDrain {
type Ok = ();
type Err = ();
#[inline]
fn log(&self, record: &slog::Record, _logger: &slog::OwnedKVList) -> Result<(), ()> {
let slog_level = match record.level() {
slog::Level::Trace => 0,
slog::Level::Debug => 1,
slog::Level::Info => 2,
slog::Level::Warning => 3,
slog::Level::Error => 4,
slog::Level::Critical => 5,
};
if slog_level < self.0 {
return Ok(());
}
eprintln!(
"{} [{}] {}:{} - {}",
chrono::Utc::now().format("%Y-%m-%dT%H:%M:%S%.3fZ"),
record.level(),
record
.file()
.rsplit_once('/')
.map(|(_, s)| s)
.unwrap_or(record.file()),
record.line(),
record.msg(),
);
Ok(())
}
}
let drain = StderrDrain(level_filter);
let drain = std::sync::Mutex::new(drain);
let drain = slog::Fuse::new(drain);
slog::Logger::root(drain, slog::o!())
}
fn build_session_key(cfg: &AppConfig) -> anyhow::Result<Key> {
if let Some(secret) = cfg.env.get("APP_SESSION_SECRET") {
// Stretch the secret by cycling its bytes to the 64 bytes Key::from
// requires. This adds no entropy, so the secret should be long and random.
let bytes: Vec<u8> = secret.as_bytes().iter().cycle().take(64).copied().collect();
return Ok(Key::from(&bytes));
}
// No secret configured: generate an ephemeral key (sessions reset on restart).
Ok(Key::generate())
}
#[tokio::main]
async fn main() -> anyhow::Result<()> {
let cfg = AppConfig::load();
let log_level = cfg.log_level().unwrap_or_else(|_| "info".to_string());
let log = build_slog_logger(&log_level);
slog::info!(
log,
"Starting {} {}",
cfg.app_name().unwrap_or_default(),
cfg.app_version().unwrap_or_default()
);
let db = AppDatabase::init(&cfg).await?;
slog::info!(log, "Database connected");
let redis_urls = cfg.redis_urls()?;
let store: RedisClusterSessionStore = RedisClusterSessionStore::new(redis_urls).await?;
slog::info!(log, "Redis connected");
let cache = AppCache::init(&cfg).await?;
slog::info!(log, "Cache initialized");
run_migrations(&db, &log).await?;
let session_key = build_session_key(&cfg)?;
let args = ServerArgs::parse();
let service = AppService::new(cfg.clone()).await?;
slog::info!(log, "AppService initialized");
let (shutdown_tx, shutdown_rx) = tokio::sync::broadcast::channel::<()>(1);
let worker_service = service.clone();
let log_for_http = log.clone();
let log_for_worker = log.clone();
let worker_handle = tokio::spawn(async move {
worker_service
.start_room_workers(shutdown_rx, log_for_worker)
.await
});
let bind_addr = args.bind.unwrap_or_else(|| "127.0.0.1:8080".to_string());
slog::info!(log, "Listening on {}", bind_addr);
HttpServer::new(move || {
let cors = Cors::default()
.allow_any_origin()
.allow_any_method()
.allow_any_header()
.supports_credentials()
.max_age(3600);
let session_mw = SessionMiddleware::builder(store.clone(), session_key.clone())
.cookie_name("id".to_string())
.cookie_path("/".to_string())
.cookie_secure(false)
.cookie_http_only(true)
.session_lifecycle(SessionLifecycle::PersistentSession(
PersistentSession::default()
.session_ttl(Duration::days(30))
.session_ttl_extension_policy(TtlExtensionPolicy::OnEveryRequest),
))
.build();
App::new()
.wrap(cors)
.wrap(session_mw)
.wrap(Logger::default().exclude("/health"))
.app_data(web::Data::new(AppState {
db: db.clone(),
cache: cache.clone(),
}))
.app_data(web::Data::new(service.clone()))
.app_data(web::Data::new(cfg.clone()))
.app_data(web::Data::new(db.clone()))
.app_data(web::Data::new(cache.clone()))
.wrap(logging::RequestLogger::new(log_for_http.clone()))
.route("/health", web::get().to(health_check))
.configure(api::route::init_routes)
})
.bind(&bind_addr)?
.run()
.await?;
slog::info!(log, "Server stopped, shutting down room workers");
let _ = shutdown_tx.send(());
let _ = worker_handle.await;
slog::info!(log, "Room workers stopped");
Ok(())
}
async fn run_migrations(db: &AppDatabase, log: &slog::Logger) -> anyhow::Result<()> {
slog::info!(log, "Running database migrations...");
Migrator::up(db.writer(), None)
.await
.map_err(|e| anyhow::anyhow!("Migration failed: {:?}", e))?;
slog::info!(log, "Migrations completed");
Ok(())
}
async fn health_check(state: web::Data<AppState>) -> HttpResponse {
let db_ok = db_ping(&state.db).await;
let cache_ok = cache_ping(&state.cache).await;
let healthy = db_ok && cache_ok;
if healthy {
HttpResponse::Ok().json(serde_json::json!({
"status": "ok",
"db": "ok",
"cache": "ok",
}))
} else {
HttpResponse::ServiceUnavailable().json(serde_json::json!({
"status": "unhealthy",
"db": if db_ok { "ok" } else { "error" },
"cache": if cache_ok { "ok" } else { "error" },
}))
}
}
async fn db_ping(db: &AppDatabase) -> bool {
db.query_one_raw(sea_orm::Statement::from_string(
sea_orm::DbBackend::Postgres,
"SELECT 1",
))
.await
.is_ok()
}
async fn cache_ping(cache: &AppCache) -> bool {
cache.conn().await.is_ok()
}
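One detail worth noting in `main.rs`: when `APP_SESSION_SECRET` is set, `build_session_key` stretches it by cycling its bytes until it reaches the 64 bytes that `Key::from` requires. The core of that derivation in isolation (a sketch, not the project's code verbatim):

```rust
// Sketch of the key-stretching step from build_session_key: repeat the
// secret's bytes until 64 bytes are collected. This adds no entropy; a
// short or empty secret still yields a weak (or empty) buffer.
fn derive_key_bytes(secret: &str) -> Vec<u8> {
    secret.as_bytes().iter().cycle().take(64).copied().collect()
}

fn main() {
    let bytes = derive_key_bytes("example-secret");
    println!("{} bytes", bytes.len());
}
```

Because cycling only repeats the input, a production deployment should set `APP_SESSION_SECRET` to at least 64 random bytes rather than relying on the stretching.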

30
apps/email/Cargo.toml Normal file

@@ -0,0 +1,30 @@
[package]
name = "email-server"
version.workspace = true
edition.workspace = true
authors.workspace = true
description.workspace = true
repository.workspace = true
readme.workspace = true
homepage.workspace = true
license.workspace = true
keywords.workspace = true
categories.workspace = true
documentation.workspace = true
[[bin]]
name = "email-worker"
path = "src/main.rs"
[dependencies]
tokio = { workspace = true, features = ["full"] }
service = { workspace = true }
db = { workspace = true }
config = { workspace = true }
slog = { workspace = true }
anyhow = { workspace = true }
clap = { workspace = true, features = ["derive"] }
chrono = { workspace = true, features = ["serde"] }
[lints]
workspace = true

84
apps/email/src/main.rs Normal file

@@ -0,0 +1,84 @@
use clap::Parser;
use config::AppConfig;
use service::AppService;
use slog::{Drain, OwnedKVList, Record};
#[derive(Parser, Debug)]
#[command(name = "email-worker")]
#[command(version)]
struct Args {
#[arg(long, default_value = "info")]
log_level: String,
}
#[tokio::main]
async fn main() -> anyhow::Result<()> {
let args = Args::parse();
let cfg = AppConfig::load();
let log = build_logger(&args.log_level);
slog::info!(log, "Starting email worker");
let service = AppService::new(cfg).await?;
let (shutdown_tx, shutdown_rx) = tokio::sync::broadcast::channel::<()>(1);
let log_for_signal = log.clone();
tokio::spawn(async move {
tokio::signal::ctrl_c().await.ok();
slog::info!(log_for_signal, "shutting down email worker");
let _ = shutdown_tx.send(());
});
service.start_email_workers(shutdown_rx).await?;
slog::info!(log, "email worker stopped");
Ok(())
}
fn build_logger(level: &str) -> slog::Logger {
let level_filter = match level {
"trace" => 0usize,
"debug" => 1usize,
"info" => 2usize,
"warn" => 3usize,
"error" => 4usize,
_ => 2usize,
};
struct StderrDrain(usize);
impl Drain for StderrDrain {
type Ok = ();
type Err = ();
#[inline]
fn log(&self, record: &Record, _logger: &OwnedKVList) -> Result<(), ()> {
let slog_level = match record.level() {
slog::Level::Trace => 0,
slog::Level::Debug => 1,
slog::Level::Info => 2,
slog::Level::Warning => 3,
slog::Level::Error => 4,
slog::Level::Critical => 5,
};
if slog_level < self.0 {
return Ok(());
}
eprintln!(
"{} [{}] {}:{} - {}",
chrono::Utc::now().format("%Y-%m-%dT%H:%M:%S%.3fZ"),
record.level(),
record
.file()
.rsplit_once('/')
.map(|(_, s)| s)
.unwrap_or(record.file()),
record.line(),
record.msg(),
);
Ok(())
}
}
let drain = StderrDrain(level_filter);
let drain = std::sync::Mutex::new(drain);
let drain = slog::Fuse::new(drain);
slog::Logger::root(drain, slog::o!())
}

apps/git-hook/Cargo.toml
@ -0,0 +1,27 @@
[package]
name = "git-hook"
version.workspace = true
edition.workspace = true
authors.workspace = true
description.workspace = true
repository.workspace = true
readme.workspace = true
homepage.workspace = true
license.workspace = true
keywords.workspace = true
categories.workspace = true
documentation.workspace = true
[dependencies]
tokio = { workspace = true, features = ["full"] }
git = { workspace = true }
db = { workspace = true }
config = { workspace = true }
tracing = { workspace = true }
tracing-subscriber = { workspace = true, features = ["json"] }
anyhow = { workspace = true }
slog = { workspace = true }
clap = { workspace = true, features = ["derive"] }
tokio-util = { workspace = true }
chrono = { workspace = true, features = ["serde"] }
reqwest = { workspace = true }

apps/git-hook/src/args.rs
@ -0,0 +1,10 @@
use clap::Parser;
#[derive(Parser, Debug)]
#[command(name = "git-hook")]
#[command(version)]
pub struct HookArgs {
/// Worker ID for this instance. Defaults to the HOOK_POOL_WORKER_ID env var or a generated UUID.
#[arg(long)]
pub worker_id: Option<String>,
}

apps/git-hook/src/main.rs
@ -0,0 +1,142 @@
use clap::Parser;
use config::AppConfig;
use db::cache::AppCache;
use db::database::AppDatabase;
use git::hook::GitServiceHooks;
use slog::{Drain, OwnedKVList, Record};
use tokio::signal;
use tokio_util::sync::CancellationToken;
mod args;
use args::HookArgs;
#[tokio::main]
async fn main() -> anyhow::Result<()> {
// 1. Load configuration
let cfg = AppConfig::load();
// 2. Init slog logging
let log_level = cfg.log_level().unwrap_or_else(|_| "info".to_string());
let log = build_slog_logger(&log_level);
// 3. Connect to database
let db = AppDatabase::init(&cfg).await?;
slog::info!(log, "database connected");
// 4. Connect to Redis cache (also provides the cluster pool for hook queue)
let cache = AppCache::init(&cfg).await?;
slog::info!(log, "cache connected");
// 5. Parse CLI args
let args = HookArgs::parse();
slog::info!(log, "git-hook worker starting";
"worker_id" => %args.worker_id.unwrap_or_else(|| "default".to_string())
);
// 6. Build HTTP client for webhook delivery
let http = reqwest::Client::builder()
.user_agent("Code-Git-Hook/1.0")
.build()
.unwrap_or_else(|_| reqwest::Client::new());
// 7. Build and run git hook service
let hooks = GitServiceHooks::new(
db,
cache.clone(),
cache.redis_pool().clone(),
log.clone(),
cfg,
std::sync::Arc::new(http),
);
let cancel = CancellationToken::new();
let cancel_clone = cancel.clone();
// Spawn signal handler
let log_clone = log.clone();
tokio::spawn(async move {
let ctrl_c = async {
signal::ctrl_c()
.await
.expect("failed to install CTRL+C handler");
};
#[cfg(unix)]
let term = async {
use tokio::signal::unix::{SignalKind, signal};
let mut sig =
signal(SignalKind::terminate()).expect("failed to install SIGTERM handler");
sig.recv().await;
};
#[cfg(not(unix))]
let term = std::future::pending::<()>();
tokio::select! {
_ = ctrl_c => {
slog::info!(log_clone, "received SIGINT, initiating shutdown");
}
_ = term => {
slog::info!(log_clone, "received SIGTERM, initiating shutdown");
}
}
cancel_clone.cancel();
});
hooks.run(cancel).await?;
slog::info!(log, "git-hook worker stopped");
Ok(())
}
fn build_slog_logger(level: &str) -> slog::Logger {
let level_filter = match level {
"trace" => 0usize,
"debug" => 1usize,
"info" => 2usize,
"warn" => 3usize,
"error" => 4usize,
_ => 2usize,
};
struct StderrDrain(usize);
impl Drain for StderrDrain {
type Ok = ();
type Err = ();
#[inline]
fn log(&self, record: &Record, _logger: &OwnedKVList) -> Result<(), ()> {
let slog_level = match record.level() {
slog::Level::Trace => 0,
slog::Level::Debug => 1,
slog::Level::Info => 2,
slog::Level::Warning => 3,
slog::Level::Error => 4,
slog::Level::Critical => 5,
};
if slog_level < self.0 {
return Ok(());
}
eprintln!(
"{} [{}] {}:{} - {}",
chrono::Utc::now().format("%Y-%m-%dT%H:%M:%S%.3fZ"),
record.level(),
record
.file()
.rsplit_once('/')
.map(|(_, s)| s)
.unwrap_or(record.file()),
record.line(),
record.msg(),
);
Ok(())
}
}
let drain = StderrDrain(level_filter);
let drain = std::sync::Mutex::new(drain);
let drain = slog::Fuse::new(drain);
slog::Logger::root(drain, slog::o!())
}

apps/gitserver/Cargo.toml
@ -0,0 +1,30 @@
[package]
name = "gitserver"
version.workspace = true
edition.workspace = true
authors.workspace = true
description.workspace = true
repository.workspace = true
readme.workspace = true
homepage.workspace = true
license.workspace = true
keywords.workspace = true
categories.workspace = true
documentation.workspace = true
[[bin]]
name = "gitserver"
path = "src/main.rs"
[dependencies]
tokio = { workspace = true, features = ["full"] }
git = { workspace = true }
db = { workspace = true }
config = { workspace = true }
slog = { workspace = true }
anyhow = { workspace = true }
clap = { workspace = true, features = ["derive"] }
chrono = { workspace = true, features = ["serde"] }
[lints]
workspace = true

@ -0,0 +1,94 @@
use clap::Parser;
use config::AppConfig;
use slog::{Drain, OwnedKVList, Record};
#[derive(Parser, Debug)]
#[command(name = "gitserver")]
#[command(version)]
struct Args {
#[arg(long, default_value = "info")]
log_level: String,
}
#[tokio::main]
async fn main() -> anyhow::Result<()> {
let args = Args::parse();
let cfg = AppConfig::load();
let log = build_logger(&args.log_level);
let http_handle = tokio::spawn(git::http::run_http(cfg.clone(), log.clone()));
let ssh_handle = tokio::spawn(git::ssh::run_ssh(cfg, log.clone()));
tokio::select! {
result = http_handle => {
match result {
Ok(Ok(())) => slog::info!(log, "HTTP server stopped"),
Ok(Err(e)) => slog::error!(log, "HTTP server error: {}", e),
Err(e) => slog::error!(log, "HTTP server task panicked: {}", e),
}
}
result = ssh_handle => {
match result {
Ok(Ok(())) => slog::info!(log, "SSH server stopped"),
Ok(Err(e)) => slog::error!(log, "SSH server error: {}", e),
Err(e) => slog::error!(log, "SSH server task panicked: {}", e),
}
}
_ = tokio::signal::ctrl_c() => {
slog::info!(log, "received shutdown signal");
}
}
slog::info!(log, "shutting down");
Ok(())
}
fn build_logger(level: &str) -> slog::Logger {
let level_filter = match level {
"trace" => 0usize,
"debug" => 1usize,
"info" => 2usize,
"warn" => 3usize,
"error" => 4usize,
_ => 2usize,
};
struct StderrDrain(usize);
impl Drain for StderrDrain {
type Ok = ();
type Err = ();
#[inline]
fn log(&self, record: &Record, _logger: &OwnedKVList) -> Result<(), ()> {
let slog_level = match record.level() {
slog::Level::Trace => 0,
slog::Level::Debug => 1,
slog::Level::Info => 2,
slog::Level::Warning => 3,
slog::Level::Error => 4,
slog::Level::Critical => 5,
};
if slog_level < self.0 {
return Ok(());
}
eprintln!(
"{} [{}] {}:{} - {}",
chrono::Utc::now().format("%Y-%m-%dT%H:%M:%S%.3fZ"),
record.level(),
record
.file()
.rsplit_once('/')
.map(|(_, s)| s)
.unwrap_or(record.file()),
record.line(),
record.msg(),
);
Ok(())
}
}
let drain = StderrDrain(level_filter);
let drain = std::sync::Mutex::new(drain);
let drain = slog::Fuse::new(drain);
slog::Logger::root(drain, slog::o!())
}

apps/migrate/Cargo.toml
@ -0,0 +1,13 @@
[package]
name = "migrate-cli"
version.workspace = true
edition.workspace = true
[dependencies]
migrate.workspace = true
sea-orm = { workspace = true, features = ["sqlx-all", "runtime-tokio"] }
tokio = { workspace = true, features = ["rt-multi-thread", "macros"] }
anyhow.workspace = true
clap.workspace = true
dotenvy.workspace = true
config = { workspace = true }

apps/migrate/src/main.rs
@ -0,0 +1,102 @@
use anyhow::Context;
use clap::Command;
use migrate::MigratorTrait;
use sea_orm::{Database, DatabaseConnection};
#[tokio::main]
async fn main() -> anyhow::Result<()> {
dotenvy::dotenv().ok();
let steps_arg = clap::Arg::new("steps")
.help("Number of migrations (for up/down)")
.required(false)
.index(1);
let cmd = Command::new("migrate")
.about("Database migration CLI")
.subcommand(Command::new("up").about("Apply pending migrations").arg(steps_arg.clone()))
.subcommand(Command::new("down").about("Revert applied migrations").arg(steps_arg))
.subcommand(Command::new("fresh").about("Drop all tables and re-apply"))
.subcommand(Command::new("refresh").about("Revert all then re-apply"))
.subcommand(Command::new("reset").about("Revert all applied migrations"))
.subcommand(Command::new("status").about("Show migration status"))
.get_matches();
let db_url = config::AppConfig::load().database_url()?;
let db: DatabaseConnection = Database::connect(&db_url).await?;
match cmd.subcommand() {
Some(("up", sub)) => {
let steps = sub
.get_one::<String>("steps")
.and_then(|s| s.parse().ok())
.unwrap_or(0);
run_up(&db, steps).await?;
}
Some(("down", sub)) => {
let steps = sub
.get_one::<String>("steps")
.and_then(|s| s.parse().ok())
.unwrap_or(1);
run_down(&db, steps).await?;
}
Some(("fresh", _)) => run_fresh(&db).await?,
Some(("refresh", _)) => run_refresh(&db).await?,
Some(("reset", _)) => run_reset(&db).await?,
Some(("status", _)) => run_status(&db).await?,
_ => {
eprintln!(
"Usage: migrate <command>\nCommands: up, down, fresh, refresh, reset, status"
);
std::process::exit(1);
}
}
Ok(())
}
async fn run_up(db: &DatabaseConnection, steps: u32) -> anyhow::Result<()> {
migrate::Migrator::up(db, if steps == 0 { None } else { Some(steps) })
.await
.context("failed to run migrations up")?;
Ok(())
}
async fn run_down(db: &DatabaseConnection, steps: u32) -> anyhow::Result<()> {
migrate::Migrator::down(db, Some(steps))
.await
.context("failed to run migrations down")?;
Ok(())
}
async fn run_fresh(db: &DatabaseConnection) -> anyhow::Result<()> {
migrate::Migrator::fresh(db)
.await
.context("failed to run migrations fresh")?;
Ok(())
}
async fn run_refresh(db: &DatabaseConnection) -> anyhow::Result<()> {
migrate::Migrator::refresh(db)
.await
.context("failed to run migrations refresh")?;
Ok(())
}
async fn run_reset(db: &DatabaseConnection) -> anyhow::Result<()> {
migrate::Migrator::reset(db)
.await
.context("failed to run migrations reset")?;
Ok(())
}
async fn run_status(db: &DatabaseConnection) -> anyhow::Result<()> {
migrate::Migrator::status(db)
.await
.context("failed to get migration status")?;
Ok(())
}

apps/operator/Cargo.toml
@ -0,0 +1,30 @@
[package]
name = "operator"
version.workspace = true
edition.workspace = true
authors.workspace = true
description.workspace = true
repository.workspace = true
readme.workspace = true
homepage.workspace = true
license.workspace = true
keywords.workspace = true
categories.workspace = true
documentation.workspace = true
[dependencies]
kube = { workspace = true }
k8s-openapi = { workspace = true }
serde = { workspace = true }
serde_json.workspace = true
serde_yaml = { workspace = true }
tokio = { workspace = true, features = ["rt-multi-thread", "macros", "sync"] }
anyhow.workspace = true
futures.workspace = true
tracing.workspace = true
tracing-subscriber.workspace = true
chrono = { workspace = true }
uuid = { workspace = true, features = ["v4"] }
[lints]
workspace = true

@ -0,0 +1,44 @@
//! Shared reconcile context.
use kube::Client;
/// Context passed to every reconcile call.
#[derive(Clone)]
pub struct ReconcileCtx {
pub client: Client,
/// Default image registry prefix (e.g. "myapp/").
pub image_prefix: String,
/// Operator's own namespace.
pub operator_namespace: String,
}
impl ReconcileCtx {
pub async fn from_env() -> anyhow::Result<Self> {
let client = Client::try_default().await?;
let ns = std::env::var("POD_NAMESPACE").unwrap_or_else(|_| "default".to_string());
let prefix =
std::env::var("OPERATOR_IMAGE_PREFIX").unwrap_or_else(|_| "myapp/".to_string());
Ok(Self {
client,
image_prefix: prefix,
operator_namespace: ns,
})
}
/// Prepend image_prefix to an unqualified image name.
/// E.g. "app:latest" → "myapp/app:latest"
pub fn resolve_image(&self, image: &str) -> String {
// If it already has a registry/domain component, leave it alone.
if image.contains('/') && !image.starts_with(&self.image_prefix) {
image.to_string()
} else if image.starts_with(&self.image_prefix) {
image.to_string()
} else {
// Unqualified name: prepend prefix.
format!("{}{}", self.image_prefix, image)
}
}
}
pub type ReconcileState = ReconcileCtx;

@ -0,0 +1,221 @@
//! Controller for the `App` CRD — manages Deployment + Service.
use crate::context::ReconcileState;
use crate::controller::helpers::{
child_meta, env_var_to_json, merge_env, owner_ref, query_deployment_status, std_labels,
};
use crate::crd::{App, AppSpec};
use serde_json::{Value, json};
use std::sync::Arc;
use tracing::info;
/// Reconcile an App resource: create/update Deployment + Service.
pub async fn reconcile(app: Arc<App>, ctx: Arc<ReconcileState>) -> Result<(), kube::Error> {
let ns = app.metadata.namespace.as_deref().unwrap_or("default");
let name = app.metadata.name.as_deref().unwrap_or("");
let spec = &app.spec;
let client = &ctx.client;
let or = owner_ref(&app.metadata, &app.api_version, &app.kind);
let labels = std_labels();
// ---- Deployment ----
let deployment = build_deployment(ns, name, spec, &or, &labels);
apply_deployment(client, ns, name, &deployment).await?;
// ---- Service ----
let service = build_service(ns, name, &or, &labels);
apply_service(client, ns, name, &service).await?;
// ---- Status patch ----
let (ready_replicas, phase) = query_deployment_status(client, ns, name).await?;
let status = json!({
"status": {
"readyReplicas": ready_replicas,
"phase": phase
}
});
patch_status::<App>(client, ns, name, &status).await?;
Ok(())
}
fn build_deployment(
ns: &str,
name: &str,
spec: &AppSpec,
or: &crate::crd::OwnerReference,
labels: &std::collections::BTreeMap<String, String>,
) -> Value {
let env = merge_env(&[], &spec.env);
let image = if spec.image.is_empty() {
"myapp/app:latest".to_string()
} else {
spec.image.clone()
};
let pull = if spec.image_pull_policy.is_empty() {
"IfNotPresent".to_string()
} else {
spec.image_pull_policy.clone()
};
let resources = build_resources(&spec.resources);
let liveness = spec.liveness_probe.as_ref().map(|p| {
json!({
"httpGet": { "path": p.path, "port": p.port },
"initialDelaySeconds": p.initial_delay_seconds,
"periodSeconds": 10,
})
});
let readiness = spec.readiness_probe.as_ref().map(|p| {
json!({
"httpGet": { "path": p.path, "port": p.port },
"initialDelaySeconds": p.initial_delay_seconds,
"periodSeconds": 5,
})
});
json!({
"metadata": child_meta(name, ns, or, labels.clone()),
"spec": {
"replicas": spec.replicas,
"selector": { "matchLabels": labels },
"strategy": {
"type": "RollingUpdate",
"rollingUpdate": { "maxSurge": 1, "maxUnavailable": 0 }
},
"template": {
"metadata": { "labels": labels.clone() },
"spec": {
"containers": [{
"name": "app",
"image": image,
"ports": [{ "containerPort": 8080 }],
"env": env.iter().map(env_var_to_json).collect::<Vec<_>>(),
"imagePullPolicy": pull,
"resources": resources,
"livenessProbe": liveness,
"readinessProbe": readiness,
}]
}
}
}
})
}
fn build_service(
ns: &str,
name: &str,
or: &crate::crd::OwnerReference,
labels: &std::collections::BTreeMap<String, String>,
) -> Value {
json!({
"metadata": child_meta(name, ns, or, labels.clone()),
"spec": {
"ports": [{ "port": 80, "targetPort": 8080, "name": "http" }],
"selector": labels.clone(),
"type": "ClusterIP"
}
})
}
pub(crate) fn build_resources(res: &Option<crate::crd::ResourceRequirements>) -> Value {
match res {
Some(r) => {
let mut out = serde_json::Map::new();
if let Some(ref req) = r.requests {
let mut req_map = serde_json::Map::new();
if let Some(ref cpu) = req.cpu {
req_map.insert("cpu".to_string(), json!(cpu));
}
if let Some(ref mem) = req.memory {
req_map.insert("memory".to_string(), json!(mem));
}
if !req_map.is_empty() {
out.insert("requests".to_string(), Value::Object(req_map));
}
}
if let Some(ref lim) = r.limits {
let mut lim_map = serde_json::Map::new();
if let Some(ref cpu) = lim.cpu {
lim_map.insert("cpu".to_string(), json!(cpu));
}
if let Some(ref mem) = lim.memory {
lim_map.insert("memory".to_string(), json!(mem));
}
if !lim_map.is_empty() {
out.insert("limits".to_string(), Value::Object(lim_map));
}
}
if out.is_empty() {
json!({})
} else {
Value::Object(out)
}
}
None => json!({}),
}
}
pub(crate) async fn apply_deployment(
client: &kube::Client,
ns: &str,
name: &str,
body: &Value,
) -> Result<(), kube::Error> {
let api: kube::Api<crate::crd::JsonResource> = kube::Api::namespaced(client.clone(), ns);
let jr = crate::crd::JsonResource::new(Default::default(), body.clone());
match api.get(name).await {
Ok(_) => {
info!(name, ns, "updating app deployment");
let _ = api
.replace(name, &kube::api::PostParams::default(), &jr)
.await?;
}
Err(kube::Error::Api(e)) if e.code == 404 => {
info!(name, ns, "creating app deployment");
let _ = api.create(&kube::api::PostParams::default(), &jr).await?;
}
Err(e) => return Err(e),
}
Ok(())
}
pub(crate) async fn apply_service(
client: &kube::Client,
ns: &str,
name: &str,
body: &Value,
) -> Result<(), kube::Error> {
let api: kube::Api<crate::crd::JsonResource> = kube::Api::namespaced(client.clone(), ns);
let jr = crate::crd::JsonResource::new(Default::default(), body.clone());
match api.get(name).await {
Ok(_) => {
let _ = api
.replace(name, &kube::api::PostParams::default(), &jr)
.await?;
}
Err(kube::Error::Api(e)) if e.code == 404 => {
let _ = api.create(&kube::api::PostParams::default(), &jr).await?;
}
Err(e) => return Err(e),
}
Ok(())
}
pub(crate) async fn patch_status<T: Clone + serde::de::DeserializeOwned + std::fmt::Debug>(
client: &kube::Client,
ns: &str,
name: &str,
body: &Value,
) -> Result<(), kube::Error> {
let api: kube::Api<crate::crd::JsonResource> = kube::Api::namespaced(client.clone(), ns);
let _ = api
.patch_status(
name,
&kube::api::PatchParams::default(),
&kube::api::Patch::Merge(body),
)
.await?;
Ok(())
}

@ -0,0 +1,68 @@
//! Controller for the `EmailWorker` CRD — Deployment only.
use crate::context::ReconcileState;
use crate::controller::app::{apply_deployment, patch_status};
use crate::controller::helpers::{
child_meta, env_var_to_json, merge_env, owner_ref, query_deployment_status, std_labels,
};
use crate::crd::{EmailWorker, EmailWorkerSpec};
use serde_json::{Value, json};
use std::sync::Arc;
pub async fn reconcile(ew: Arc<EmailWorker>, ctx: Arc<ReconcileState>) -> Result<(), kube::Error> {
let ns = ew.metadata.namespace.as_deref().unwrap_or("default");
let name = ew.metadata.name.as_deref().unwrap_or("");
let spec = &ew.spec;
let client = &ctx.client;
let or = owner_ref(&ew.metadata, &ew.api_version, &ew.kind);
let labels = std_labels();
let deployment = build_deployment(ns, name, spec, &or, &labels);
apply_deployment(client, ns, name, &deployment).await?;
let (ready_replicas, phase) = query_deployment_status(client, ns, name).await?;
let status = json!({ "status": { "readyReplicas": ready_replicas, "phase": phase } });
patch_status::<EmailWorker>(client, ns, name, &status).await?;
Ok(())
}
fn build_deployment(
ns: &str,
name: &str,
spec: &EmailWorkerSpec,
or: &crate::crd::OwnerReference,
labels: &std::collections::BTreeMap<String, String>,
) -> Value {
let env = merge_env(&[], &spec.env);
let image = if spec.image.is_empty() {
"myapp/email-worker:latest".to_string()
} else {
spec.image.clone()
};
let pull = if spec.image_pull_policy.is_empty() {
"IfNotPresent".to_string()
} else {
spec.image_pull_policy.clone()
};
let resources = super::app::build_resources(&spec.resources);
json!({
"metadata": child_meta(name, ns, or, labels.clone()),
"spec": {
"replicas": 1,
"selector": { "matchLabels": labels },
"template": {
"metadata": { "labels": labels.clone() },
"spec": {
"containers": [{
"name": "email-worker",
"image": image,
"env": env.iter().map(env_var_to_json).collect::<Vec<_>>(),
"imagePullPolicy": pull,
"resources": resources,
}]
}
}
}
})
}

@ -0,0 +1,137 @@
//! Controller for the `GitHook` CRD — Deployment + ConfigMap.
use crate::context::ReconcileState;
use crate::controller::app::{apply_deployment, patch_status};
use crate::controller::helpers::{
child_meta, env_var_to_json, merge_env, owner_ref, query_deployment_status, std_labels,
};
use crate::crd::{GitHook, GitHookSpec, JsonResource};
use serde_json::{Value, json};
use std::sync::Arc;
use tracing::info;
pub async fn reconcile(gh: Arc<GitHook>, ctx: Arc<ReconcileState>) -> Result<(), kube::Error> {
let ns = gh.metadata.namespace.as_deref().unwrap_or("default");
let name = gh.metadata.name.as_deref().unwrap_or("");
let spec = &gh.spec;
let client = &ctx.client;
let or = owner_ref(&gh.metadata, &gh.api_version, &gh.kind);
let labels = std_labels();
let cm_name = format!("{}-config", name);
// ---- ConfigMap ----
let configmap = build_configmap(ns, &cm_name, &or, &labels);
apply_configmap(client, ns, &cm_name, &configmap).await?;
// ---- Deployment ----
let deployment = build_deployment(ns, name, &cm_name, spec, &or, &labels);
apply_deployment(client, ns, name, &deployment).await?;
let (ready_replicas, phase) = query_deployment_status(client, ns, name).await?;
let status = json!({ "status": { "readyReplicas": ready_replicas, "phase": phase } });
patch_status::<GitHook>(client, ns, name, &status).await?;
Ok(())
}
fn build_configmap(
ns: &str,
cm_name: &str,
or: &crate::crd::OwnerReference,
labels: &std::collections::BTreeMap<String, String>,
) -> Value {
let pool_config = serde_yaml::to_string(&serde_json::json!({
"max_concurrent": 8,
"cpu_threshold": 80.0,
"redis_list_prefix": "{hook}",
"redis_log_channel": "hook:logs",
"redis_block_timeout_secs": 5,
"redis_max_retries": 3,
}))
.unwrap_or_default();
json!({
"metadata": child_meta(cm_name, ns, or, labels.clone()),
"data": {
"pool.yaml": pool_config
}
})
}
fn build_deployment(
ns: &str,
name: &str,
cm_name: &str,
spec: &GitHookSpec,
or: &crate::crd::OwnerReference,
labels: &std::collections::BTreeMap<String, String>,
) -> Value {
let env = merge_env(&[], &spec.env);
let image = if spec.image.is_empty() {
"myapp/git-hook:latest".to_string()
} else {
spec.image.clone()
};
let pull = if spec.image_pull_policy.is_empty() {
"IfNotPresent".to_string()
} else {
spec.image_pull_policy.clone()
};
let resources = super::app::build_resources(&spec.resources);
// Add WORKER_ID env
let worker_id = spec
.worker_id
.clone()
.unwrap_or_else(|| uuid::Uuid::new_v4().to_string());
let mut env_vars: Vec<serde_json::Value> = env.iter().map(env_var_to_json).collect();
env_vars.push(json!({ "name": "HOOK_POOL_WORKER_ID", "value": worker_id }));
json!({
"metadata": child_meta(name, ns, or, labels.clone()),
"spec": {
"replicas": 1,
"selector": { "matchLabels": labels },
"template": {
"metadata": { "labels": labels.clone() },
"spec": {
"containers": [{
"name": "git-hook",
"image": image,
"env": env_vars,
"imagePullPolicy": pull,
"resources": resources,
"volumeMounts": [{ "name": "hook-config", "mountPath": "/config" }]
}],
"volumes": [{
"name": "hook-config",
"configMap": { "name": cm_name }
}]
}
}
}
})
}
async fn apply_configmap(
client: &kube::Client,
ns: &str,
name: &str,
body: &Value,
) -> Result<(), kube::Error> {
let api: kube::Api<JsonResource> = kube::Api::namespaced(client.clone(), ns);
let jr = JsonResource::new(Default::default(), body.clone());
match api.get(name).await {
Ok(_) => {
let _ = api
.replace(name, &kube::api::PostParams::default(), &jr)
.await?;
Ok(())
}
Err(kube::Error::Api(e)) if e.code == 404 => {
info!(name, ns, "creating git-hook configmap");
let _ = api.create(&kube::api::PostParams::default(), &jr).await?;
Ok(())
}
Err(e) => Err(e),
}
}

@ -0,0 +1,164 @@
//! Controller for the `GitServer` CRD — Deployment + HTTP Svc + SSH Svc + PVC.
use crate::context::ReconcileState;
use crate::controller::app::{apply_deployment, apply_service, patch_status};
use crate::controller::helpers::{
child_meta, env_var_to_json, merge_env, owner_ref, query_deployment_status, std_labels,
};
use crate::crd::{GitServer, GitServerSpec};
use serde_json::{Value, json};
use std::sync::Arc;
use tracing::info;
pub async fn reconcile(gs: Arc<GitServer>, ctx: Arc<ReconcileState>) -> Result<(), kube::Error> {
let ns = gs.metadata.namespace.as_deref().unwrap_or("default");
let name = gs.metadata.name.as_deref().unwrap_or("");
let spec = &gs.spec;
let client = &ctx.client;
let or = owner_ref(&gs.metadata, &gs.api_version, &gs.kind);
let labels = std_labels();
// ---- PVC ----
let pvc = build_pvc(ns, name, spec, &or, &labels);
apply_pvc(client, ns, &format!("{}-repos", name), &pvc).await?;
// ---- Deployment ----
let deployment = build_deployment(ns, name, spec, &or, &labels);
apply_deployment(client, ns, name, &deployment).await?;
// ---- HTTP Service ----
let http_svc = build_http_service(ns, name, spec, &or, &labels);
apply_service(client, ns, &format!("{}-http", name), &http_svc).await?;
// ---- SSH Service ----
let ssh_svc = build_ssh_service(ns, name, spec, &or, &labels);
apply_service(client, ns, &format!("{}-ssh", name), &ssh_svc).await?;
// ---- Status ----
let (ready_replicas, phase) = query_deployment_status(client, ns, name).await?;
let status = json!({ "status": { "readyReplicas": ready_replicas, "phase": phase } });
patch_status::<GitServer>(client, ns, name, &status).await?;
Ok(())
}
fn build_deployment(
ns: &str,
name: &str,
spec: &GitServerSpec,
or: &crate::crd::OwnerReference,
labels: &std::collections::BTreeMap<String, String>,
) -> Value {
let env = merge_env(&[], &spec.env);
let image = if spec.image.is_empty() {
"myapp/gitserver:latest".to_string()
} else {
spec.image.clone()
};
let pull = if spec.image_pull_policy.is_empty() {
"IfNotPresent".to_string()
} else {
spec.image_pull_policy.clone()
};
let resources = super::app::build_resources(&spec.resources);
json!({
"metadata": child_meta(name, ns, or, labels.clone()),
"spec": {
"replicas": 1,
"selector": { "matchLabels": labels },
"template": {
"metadata": { "labels": labels.clone() },
"spec": {
"containers": [{
"name": "gitserver",
"image": image,
"ports": [
{ "name": "http", "containerPort": spec.http_port },
{ "name": "ssh", "containerPort": spec.ssh_port }
],
"env": env.iter().map(env_var_to_json).collect::<Vec<_>>(),
"imagePullPolicy": pull,
"resources": resources,
"volumeMounts": [{ "name": "git-repos", "mountPath": "/data/repos" }]
}],
"volumes": [{
"name": "git-repos",
"persistentVolumeClaim": { "claimName": format!("{}-repos", name) }
}]
}
}
}
})
}
fn build_http_service(
ns: &str,
name: &str,
spec: &GitServerSpec,
or: &crate::crd::OwnerReference,
labels: &std::collections::BTreeMap<String, String>,
) -> Value {
json!({
"metadata": child_meta(&format!("{}-http", name), ns, or, labels.clone()),
"spec": {
"ports": [{ "port": spec.http_port, "targetPort": spec.http_port, "name": "http" }],
"selector": labels.clone(),
"type": "ClusterIP"
}
})
}
fn build_ssh_service(
ns: &str,
name: &str,
spec: &GitServerSpec,
or: &crate::crd::OwnerReference,
labels: &std::collections::BTreeMap<String, String>,
) -> Value {
json!({
"metadata": child_meta(&format!("{}-ssh", name), ns, or, labels.clone()),
"spec": {
"ports": [{ "port": spec.ssh_port, "targetPort": spec.ssh_port, "name": "ssh" }],
"selector": labels.clone(),
"type": spec.ssh_service_type
}
})
}
fn build_pvc(
ns: &str,
name: &str,
spec: &GitServerSpec,
or: &crate::crd::OwnerReference,
labels: &std::collections::BTreeMap<String, String>,
) -> Value {
json!({
"metadata": child_meta(&format!("{}-repos", name), ns, or, labels.clone()),
"spec": {
"accessModes": ["ReadWriteOnce"],
"resources": { "requests": { "storage": spec.storage_size } }
}
})
}
async fn apply_pvc(
client: &kube::Client,
ns: &str,
name: &str,
body: &Value,
) -> Result<(), kube::Error> {
let api: kube::Api<crate::crd::JsonResource> = kube::Api::namespaced(client.clone(), ns);
let jr = crate::crd::JsonResource::new(Default::default(), body.clone());
match api.get(name).await {
Ok(_) => {
/* already exists, don't replace PVC */
Ok(())
}
Err(kube::Error::Api(e)) if e.code == 404 => {
info!(name, ns, "creating gitserver pvc");
let _ = api.create(&kube::api::PostParams::default(), &jr).await?;
Ok(())
}
Err(e) => Err(e),
}
}

@ -0,0 +1,96 @@
//! Shared helpers for building Kubernetes child resources as JSON objects.
use crate::crd::{EnvVar, K8sObjectMeta, OwnerReference};
/// Query a Deployment's actual status and derive the CR's phase.
pub async fn query_deployment_status(
client: &kube::Client,
ns: &str,
name: &str,
) -> Result<(i32, String), kube::Error> {
use k8s_openapi::api::apps::v1::Deployment;
let api: kube::Api<Deployment> = kube::Api::namespaced(client.clone(), ns);
match api.get(name).await {
Ok(d) => {
let ready = d.status.as_ref().and_then(|s| s.ready_replicas).unwrap_or(0);
let phase = if ready > 0 { "Running" } else { "Pending" };
Ok((ready, phase.to_string()))
}
Err(kube::Error::Api(e)) if e.code == 404 => Ok((0, "Pending".to_string())),
Err(e) => Err(e),
}
}
/// Labels applied to every child resource.
pub fn std_labels() -> std::collections::BTreeMap<String, String> {
let mut m = std::collections::BTreeMap::new();
m.insert(
"app.kubernetes.io/managed-by".to_string(),
"code-operator".to_string(),
);
m.insert(
"app.kubernetes.io/part-of".to_string(),
"code-system".to_string(),
);
m
}
pub fn child_meta(
name: &str,
namespace: &str,
owner: &OwnerReference,
labels: std::collections::BTreeMap<String, String>,
) -> K8sObjectMeta {
K8sObjectMeta {
name: Some(name.to_string()),
namespace: Some(namespace.to_string()),
labels: Some(labels),
owner_references: Some(vec![owner.clone().into()]),
..Default::default()
}
}
pub fn owner_ref(parent: &K8sObjectMeta, api_version: &str, kind: &str) -> OwnerReference {
OwnerReference {
api_version: api_version.to_string(),
kind: kind.to_string(),
name: parent.name.clone().unwrap_or_default(),
uid: parent.uid.clone().unwrap_or_default(),
controller: Some(true),
block_owner_deletion: Some(true),
}
}
/// Merge env vars (global first, then local overrides).
pub fn merge_env(global: &[EnvVar], local: &[EnvVar]) -> Vec<EnvVar> {
use std::collections::BTreeMap;
let mut map: BTreeMap<String, EnvVar> = global
.iter()
.cloned()
.map(|e| (e.name.clone(), e))
.collect();
for e in local {
map.insert(e.name.clone(), e.clone());
}
map.into_values().collect()
}
pub fn env_var_to_json(e: &EnvVar) -> serde_json::Value {
use serde_json::json;
let mut m = json!({ "name": e.name });
if let Some(ref v) = e.value {
m["value"] = json!(v);
}
if let Some(ref src) = e.value_from {
if let Some(ref sr) = src.secret_ref {
m["valueFrom"] = json!({
"secretRef": {
"name": sr.secret_name,
"key": sr.secret_key,
}
});
}
}
m
}

@ -0,0 +1,171 @@
//! Controller for the `Migrate` CRD — creates a one-shot Job on reconcile.
//!
//! The Job is re-created on every reconcile (idempotent). Once the Job
//! succeeds, the Migrate status is patched to "Completed".
use crate::context::ReconcileState;
use crate::controller::helpers::{child_meta, env_var_to_json, merge_env, owner_ref, std_labels};
use crate::crd::{JsonResource, K8sObjectMeta, Migrate, MigrateSpec};
use chrono::Utc;
use serde_json::{Value, json};
use std::sync::Arc;
use tracing::info;
pub async fn reconcile(mig: Arc<Migrate>, ctx: Arc<ReconcileState>) -> Result<(), kube::Error> {
let ns = mig.metadata.namespace.as_deref().unwrap_or("default");
let name = mig.metadata.name.as_deref().unwrap_or("");
let spec = &mig.spec;
let client = &ctx.client;
let or = owner_ref(&mig.metadata, &mig.api_version, &mig.kind);
let labels = std_labels();
let job_meta = child_meta(name, ns, &or, labels.clone());
let job = build_job(spec, job_meta, &labels);
// Use JsonResource for Job create/replace (spec part)
let jobs_api: kube::Api<JsonResource> = kube::Api::namespaced(client.clone(), ns);
match jobs_api.get(name).await {
Ok(_) => {
info!(name, ns, "replacing migrate job");
let _ = jobs_api
.replace(name, &kube::api::PostParams::default(), &job)
.await?;
}
Err(kube::Error::Api(e)) if e.code == 404 => {
info!(name, ns, "creating migrate job");
let _ = jobs_api.create(&kube::api::PostParams::default(), &job).await?;
}
Err(e) => return Err(e),
}
// Query real Job status via k8s-openapi (reads status subresource)
let job_status = query_job_status(client, ns, name).await?;
patch_migrate_status_from_job(client, ns, name, &job_status).await?;
Ok(())
}
/// Query actual Job status and derive Migrate phase + timestamps.
async fn query_job_status(
client: &kube::Client,
ns: &str,
name: &str,
) -> Result<JobStatusResult, kube::Error> {
use k8s_openapi::api::batch::v1::Job;
let api: kube::Api<Job> = kube::Api::namespaced(client.clone(), ns);
match api.get(name).await {
Ok(job) => {
let status = job.status.as_ref();
let succeeded = status.and_then(|s| s.succeeded).unwrap_or(0);
let failed = status.and_then(|s| s.failed).unwrap_or(0);
let active = status.and_then(|s| s.active).unwrap_or(0);
let phase = if succeeded > 0 {
"Completed"
} else if failed > 0 {
"Failed"
} else if active > 0 {
"Running"
} else {
"Pending"
};
            // k8s-openapi `Time` wraps a chrono DateTime; format it explicitly.
            let start_time = status.and_then(|s| s.start_time.as_ref()).map(|t| t.0.to_rfc3339());
            let completion_time = status.and_then(|s| s.completion_time.as_ref()).map(|t| t.0.to_rfc3339());
            Ok(JobStatusResult { phase: phase.to_string(), start_time, completion_time })
}
Err(kube::Error::Api(e)) if e.code == 404 => {
Ok(JobStatusResult { phase: "Pending".to_string(), start_time: None, completion_time: None })
}
Err(e) => Err(e),
}
}
struct JobStatusResult {
phase: String,
start_time: Option<String>,
completion_time: Option<String>,
}
async fn patch_migrate_status_from_job(
client: &kube::Client,
ns: &str,
name: &str,
job: &JobStatusResult,
) -> Result<(), kube::Error> {
let api: kube::Api<JsonResource> = kube::Api::namespaced(client.clone(), ns);
let mut status_obj = json!({ "phase": job.phase });
if let Some(ref st) = job.start_time {
status_obj["startTime"] = json!(st);
}
if let Some(ref ct) = job.completion_time {
status_obj["completionTime"] = json!(ct);
}
let patch = json!({ "status": status_obj });
let _ = api
.patch_status(
name,
&kube::api::PatchParams::default(),
&kube::api::Patch::Merge(&patch),
)
.await?;
Ok(())
}
fn build_job(
spec: &MigrateSpec,
meta: K8sObjectMeta,
labels: &std::collections::BTreeMap<String, String>,
) -> JsonResource {
let image = if spec.image.is_empty() {
"myapp/migrate:latest".to_string()
} else {
spec.image.clone()
};
let env = merge_env(&[], &spec.env);
let env_vars: Vec<Value> = env.iter().map(env_var_to_json).collect();
let cmd_parts: Vec<&str> = spec.command.split_whitespace().collect();
let cmd: Vec<&str> = if cmd_parts.is_empty() {
vec!["up"]
} else {
cmd_parts
};
let now = Utc::now().to_rfc3339();
let mut meta_with_anno = meta.clone();
meta_with_anno.annotations = Some(std::collections::BTreeMap::from([(
"code.dev/last-migrate".to_string(),
now,
)]));
    let body = json!({
        "metadata": meta_with_anno.clone(),
        "spec": {
            "backoffLimit": spec.backoff_limit,
            "ttlSecondsAfterFinished": 300,
            "template": {
                "metadata": {
                    "labels": labels.clone()
                },
                "spec": {
                    "restartPolicy": "Never",
                    "containers": [{
                        "name": "migrate",
                        "image": image,
                        "command": ["/app/migrate"],
                        "args": cmd,
                        "env": env_vars,
                        "imagePullPolicy": "IfNotPresent"
                    }]
                }
            }
        }
    });
    // Use the annotated metadata for the Resource impl too, so meta() matches the body.
    JsonResource::new(meta_with_anno, body)
}
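// Sketch of the fallback behaviour in build_job above: an empty command string
// defaults the container args to ["up"] and the image to the default migrate image.
#[cfg(test)]
mod build_job_tests {
    use super::*;

    #[test]
    fn empty_command_falls_back_to_up() {
        let spec = MigrateSpec {
            image: String::new(),
            env: vec![],
            command: String::new(),
            backoff_limit: 3,
        };
        let labels = std_labels();
        let job = build_job(&spec, Default::default(), &labels);
        // JsonResource derefs to the underlying serde_json::Value body.
        let body: &serde_json::Value = &job;
        let container = &body["spec"]["template"]["spec"]["containers"][0];
        assert_eq!(container["args"][0], "up");
        assert_eq!(container["image"], "myapp/migrate:latest");
    }
}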


@ -0,0 +1,188 @@
//! Kubernetes Controllers — one per CRD type.
pub mod app;
pub mod email_worker;
pub mod git_hook;
pub mod gitserver;
pub mod helpers;
pub mod migrate;
use crate::context::ReconcileCtx;
use crate::crd::{App, EmailWorker, GitHook, GitServer, Migrate};
use futures::StreamExt;
use kube::runtime::{Controller, controller::Action};
use std::sync::Arc;
fn error_policy<K: std::fmt::Debug>(
    obj: Arc<K>,
    err: &kube::Error,
    _: Arc<ReconcileCtx>,
) -> Action {
    tracing::error!(?obj, %err, "reconcile error");
    // Requeue with a delay so transient API errors are retried, instead of
    // waiting for the next watch event on the object.
    Action::requeue(std::time::Duration::from_secs(30))
}
/// Start the App controller.
pub async fn start_app(client: kube::Client, ctx: Arc<ReconcileCtx>) -> anyhow::Result<()> {
Controller::new(kube::Api::<App>::all(client.clone()), Default::default())
.owns::<k8s_openapi::api::apps::v1::Deployment>(
kube::Api::all(client.clone()),
Default::default(),
)
.owns::<k8s_openapi::api::core::v1::Service>(
kube::Api::all(client.clone()),
Default::default(),
)
.run(
|o, c| {
let c = c.clone();
async move {
app::reconcile(o, c).await?;
Ok::<_, kube::Error>(Action::await_change())
}
},
error_policy,
ctx.clone(),
)
.for_each(|r| async move {
if let Err(e) = r {
tracing::error!(%e, "app controller stream error");
}
})
.await;
Ok(())
}
/// Start the GitServer controller.
pub async fn start_gitserver(client: kube::Client, ctx: Arc<ReconcileCtx>) -> anyhow::Result<()> {
Controller::new(
kube::Api::<GitServer>::all(client.clone()),
Default::default(),
)
.owns::<k8s_openapi::api::apps::v1::Deployment>(
kube::Api::all(client.clone()),
Default::default(),
)
.owns::<k8s_openapi::api::core::v1::Service>(kube::Api::all(client.clone()), Default::default())
.owns::<k8s_openapi::api::core::v1::PersistentVolumeClaim>(
kube::Api::all(client.clone()),
Default::default(),
)
.run(
|o, c| {
let c = c.clone();
async move {
gitserver::reconcile(o, c).await?;
Ok::<_, kube::Error>(Action::await_change())
}
},
error_policy,
ctx.clone(),
)
.for_each(|r| async move {
if let Err(e) = r {
tracing::error!(%e, "gitserver controller stream error");
}
})
.await;
Ok(())
}
/// Start the EmailWorker controller.
pub async fn start_email_worker(
client: kube::Client,
ctx: Arc<ReconcileCtx>,
) -> anyhow::Result<()> {
Controller::new(
kube::Api::<EmailWorker>::all(client.clone()),
Default::default(),
)
.owns::<k8s_openapi::api::apps::v1::Deployment>(
kube::Api::all(client.clone()),
Default::default(),
)
.run(
|o, c| {
let c = c.clone();
async move {
email_worker::reconcile(o, c).await?;
Ok::<_, kube::Error>(Action::await_change())
}
},
error_policy,
ctx.clone(),
)
.for_each(|r| async move {
if let Err(e) = r {
tracing::error!(%e, "email_worker controller stream error");
}
})
.await;
Ok(())
}
/// Start the GitHook controller.
pub async fn start_git_hook(client: kube::Client, ctx: Arc<ReconcileCtx>) -> anyhow::Result<()> {
Controller::new(
kube::Api::<GitHook>::all(client.clone()),
Default::default(),
)
.owns::<k8s_openapi::api::apps::v1::Deployment>(
kube::Api::all(client.clone()),
Default::default(),
)
.owns::<k8s_openapi::api::core::v1::ConfigMap>(
kube::Api::all(client.clone()),
Default::default(),
)
.run(
|o, c| {
let c = c.clone();
async move {
git_hook::reconcile(o, c).await?;
Ok::<_, kube::Error>(Action::await_change())
}
},
error_policy,
ctx.clone(),
)
.for_each(|r| async move {
if let Err(e) = r {
tracing::error!(%e, "git_hook controller stream error");
}
})
.await;
Ok(())
}
/// Start the Migrate controller.
pub async fn start_migrate(client: kube::Client, ctx: Arc<ReconcileCtx>) -> anyhow::Result<()> {
Controller::new(
kube::Api::<Migrate>::all(client.clone()),
Default::default(),
)
.owns::<k8s_openapi::api::batch::v1::Job>(kube::Api::all(client.clone()), Default::default())
.run(
|o, c| {
let c = c.clone();
async move {
migrate::reconcile(o, c).await?;
Ok::<_, kube::Error>(Action::await_change())
}
},
error_policy,
ctx.clone(),
)
.for_each(|r| async move {
if let Err(e) = r {
tracing::error!(%e, "migrate controller stream error");
}
})
.await;
Ok(())
}

apps/operator/src/crd.rs Normal file

@ -0,0 +1,581 @@
//! Custom Resource Definitions (CRDs) — plain serde types.
//!
//! API Group: `code.dev`
//!
//! The operator watches these resources using `kube::Api::<MyCrd>::all(client)`.
//! Reconcile is triggered on every change to any instance of these types.
use k8s_openapi::apimachinery::pkg::apis::meta::v1::{
ObjectMeta, OwnerReference as K8sOwnerReference,
};
use kube::Resource;
use serde::{Deserialize, Serialize};
use std::borrow::Cow;
// ---------------------------------------------------------------------------
// A dynamic Resource impl for serde_json::Value — lets us use kube::Api<Value>
// ---------------------------------------------------------------------------
/// JsonResource wraps serde_json::Value and implements Resource so we can use
/// `kube::Api<JsonResource>` for arbitrary child-resource API calls.
/// The metadata field is kept separate to satisfy the Resource::meta() bound.
#[derive(Clone, Debug, Default)]
pub struct JsonResource {
meta: ObjectMeta,
body: serde_json::Value,
}
impl JsonResource {
pub fn new(meta: ObjectMeta, body: serde_json::Value) -> Self {
JsonResource { meta, body }
}
}
impl std::ops::Deref for JsonResource {
type Target = serde_json::Value;
fn deref(&self) -> &serde_json::Value {
&self.body
}
}
impl serde::Serialize for JsonResource {
fn serialize<S: serde::Serializer>(&self, s: S) -> Result<S::Ok, S::Error> {
self.body.serialize(s)
}
}
impl<'de> serde::Deserialize<'de> for JsonResource {
fn deserialize<D: serde::Deserializer<'de>>(d: D) -> Result<Self, D::Error> {
let body = serde_json::Value::deserialize(d)?;
let meta = body
.get("metadata")
.and_then(|m| serde_json::from_value(m.clone()).ok())
.unwrap_or_default();
Ok(JsonResource { meta, body })
}
}
impl Resource for JsonResource {
type DynamicType = ();
type Scope = k8s_openapi::NamespaceResourceScope;
fn kind(_: &()) -> Cow<'_, str> {
Cow::Borrowed("Object")
}
fn group(_: &()) -> Cow<'_, str> {
Cow::Borrowed("")
}
fn version(_: &()) -> Cow<'_, str> {
Cow::Borrowed("v1")
}
fn plural(_: &()) -> Cow<'_, str> {
Cow::Borrowed("objects")
}
fn meta(&self) -> &ObjectMeta {
&self.meta
}
fn meta_mut(&mut self) -> &mut ObjectMeta {
&mut self.meta
}
}
// ---------------------------------------------------------------------------
// Shared types
// ---------------------------------------------------------------------------
/// EnvVar with optional secret reference.
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct EnvVar {
pub name: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub value: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub value_from: Option<EnvVarSource>,
}
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct EnvVarSource {
#[serde(skip_serializing_if = "Option::is_none")]
pub secret_ref: Option<SecretEnvVar>,
}
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct SecretEnvVar {
pub name: String,
pub secret_name: String,
pub secret_key: String,
}
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct ResourceRequirements {
#[serde(skip_serializing_if = "Option::is_none")]
pub requests: Option<ResourceList>,
#[serde(skip_serializing_if = "Option::is_none")]
pub limits: Option<ResourceList>,
}
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct ResourceList {
#[serde(skip_serializing_if = "Option::is_none")]
pub cpu: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub memory: Option<String>,
}
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct Probe {
#[serde(default = "default_port")]
pub port: i32,
#[serde(default = "default_path")]
pub path: String,
#[serde(default = "default_initial_delay")]
pub initial_delay_seconds: i32,
}
fn default_port() -> i32 {
8080
}
fn default_path() -> String {
"/health".to_string()
}
fn default_initial_delay() -> i32 {
5
}
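// Sketch verifying the serde defaults declared above: deserializing an empty
// JSON object yields the documented port/path/delay values.
#[cfg(test)]
mod probe_default_tests {
    use super::*;

    #[test]
    fn empty_object_uses_defaults() {
        let p: Probe = serde_json::from_str("{}").unwrap();
        assert_eq!(p.port, 8080);
        assert_eq!(p.path, "/health");
        assert_eq!(p.initial_delay_seconds, 5);
    }
}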
// ---------------------------------------------------------------------------
// App CRD
// ---------------------------------------------------------------------------
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct AppSpec {
#[serde(default = "default_app_image")]
pub image: String,
#[serde(default = "default_replicas")]
pub replicas: i32,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub env: Vec<EnvVar>,
#[serde(skip_serializing_if = "Option::is_none")]
pub resources: Option<ResourceRequirements>,
#[serde(skip_serializing_if = "Option::is_none")]
pub liveness_probe: Option<Probe>,
#[serde(skip_serializing_if = "Option::is_none")]
pub readiness_probe: Option<Probe>,
#[serde(default)]
pub image_pull_policy: String,
}
fn default_app_image() -> String {
"myapp/app:latest".to_string()
}
fn default_replicas() -> i32 {
3
}
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct AppStatus {
#[serde(skip_serializing_if = "Option::is_none")]
pub ready_replicas: Option<i32>,
#[serde(skip_serializing_if = "Option::is_none")]
pub phase: Option<String>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct App {
pub api_version: String,
pub kind: String,
pub metadata: K8sObjectMeta,
pub spec: AppSpec,
#[serde(skip_serializing_if = "Option::is_none")]
pub status: Option<AppStatus>,
}
impl App {
pub fn api_group() -> &'static str {
"code.dev"
}
pub fn version() -> &'static str {
"v1"
}
pub fn plural() -> &'static str {
"apps"
}
}
impl Resource for App {
type DynamicType = ();
type Scope = k8s_openapi::NamespaceResourceScope;
fn kind(_: &Self::DynamicType) -> Cow<'_, str> {
Cow::Borrowed("App")
}
fn group(_: &Self::DynamicType) -> Cow<'_, str> {
Cow::Borrowed("code.dev")
}
fn version(_: &Self::DynamicType) -> Cow<'_, str> {
Cow::Borrowed("v1")
}
fn plural(_: &Self::DynamicType) -> Cow<'_, str> {
Cow::Borrowed("apps")
}
fn meta(&self) -> &ObjectMeta {
&self.metadata
}
fn meta_mut(&mut self) -> &mut ObjectMeta {
&mut self.metadata
}
}
// ---------------------------------------------------------------------------
// GitServer CRD
// ---------------------------------------------------------------------------
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct GitServerSpec {
#[serde(default = "default_gitserver_image")]
pub image: String,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub env: Vec<EnvVar>,
#[serde(skip_serializing_if = "Option::is_none")]
pub resources: Option<ResourceRequirements>,
#[serde(default = "default_ssh_service_type")]
pub ssh_service_type: String,
#[serde(default = "default_storage_size")]
pub storage_size: String,
#[serde(default)]
pub image_pull_policy: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub ssh_domain: Option<String>,
#[serde(default = "default_ssh_port")]
pub ssh_port: i32,
#[serde(default = "default_http_port")]
pub http_port: i32,
}
fn default_gitserver_image() -> String {
"myapp/gitserver:latest".to_string()
}
fn default_ssh_service_type() -> String {
"NodePort".to_string()
}
fn default_storage_size() -> String {
"10Gi".to_string()
}
fn default_ssh_port() -> i32 {
22
}
fn default_http_port() -> i32 {
8022
}
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct GitServerStatus {
#[serde(skip_serializing_if = "Option::is_none")]
pub ready_replicas: Option<i32>,
#[serde(skip_serializing_if = "Option::is_none")]
pub phase: Option<String>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct GitServer {
pub api_version: String,
pub kind: String,
pub metadata: K8sObjectMeta,
pub spec: GitServerSpec,
#[serde(skip_serializing_if = "Option::is_none")]
pub status: Option<GitServerStatus>,
}
impl GitServer {
pub fn api_group() -> &'static str {
"code.dev"
}
pub fn version() -> &'static str {
"v1"
}
pub fn plural() -> &'static str {
"gitservers"
}
}
impl Resource for GitServer {
type DynamicType = ();
type Scope = k8s_openapi::NamespaceResourceScope;
fn kind(_: &Self::DynamicType) -> Cow<'_, str> {
Cow::Borrowed("GitServer")
}
fn group(_: &Self::DynamicType) -> Cow<'_, str> {
Cow::Borrowed("code.dev")
}
fn version(_: &Self::DynamicType) -> Cow<'_, str> {
Cow::Borrowed("v1")
}
fn plural(_: &Self::DynamicType) -> Cow<'_, str> {
Cow::Borrowed("gitservers")
}
fn meta(&self) -> &ObjectMeta {
&self.metadata
}
fn meta_mut(&mut self) -> &mut ObjectMeta {
&mut self.metadata
}
}
// ---------------------------------------------------------------------------
// EmailWorker CRD
// ---------------------------------------------------------------------------
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct EmailWorkerSpec {
#[serde(default = "default_email_image")]
pub image: String,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub env: Vec<EnvVar>,
#[serde(skip_serializing_if = "Option::is_none")]
pub resources: Option<ResourceRequirements>,
#[serde(default)]
pub image_pull_policy: String,
}
fn default_email_image() -> String {
"myapp/email-worker:latest".to_string()
}
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct EmailWorkerStatus {
#[serde(skip_serializing_if = "Option::is_none")]
pub ready_replicas: Option<i32>,
#[serde(skip_serializing_if = "Option::is_none")]
pub phase: Option<String>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct EmailWorker {
pub api_version: String,
pub kind: String,
pub metadata: K8sObjectMeta,
pub spec: EmailWorkerSpec,
#[serde(skip_serializing_if = "Option::is_none")]
pub status: Option<EmailWorkerStatus>,
}
impl EmailWorker {
pub fn api_group() -> &'static str {
"code.dev"
}
pub fn version() -> &'static str {
"v1"
}
pub fn plural() -> &'static str {
"emailworkers"
}
}
impl Resource for EmailWorker {
type DynamicType = ();
type Scope = k8s_openapi::NamespaceResourceScope;
fn kind(_: &Self::DynamicType) -> Cow<'_, str> {
Cow::Borrowed("EmailWorker")
}
fn group(_: &Self::DynamicType) -> Cow<'_, str> {
Cow::Borrowed("code.dev")
}
fn version(_: &Self::DynamicType) -> Cow<'_, str> {
Cow::Borrowed("v1")
}
fn plural(_: &Self::DynamicType) -> Cow<'_, str> {
Cow::Borrowed("emailworkers")
}
fn meta(&self) -> &ObjectMeta {
&self.metadata
}
fn meta_mut(&mut self) -> &mut ObjectMeta {
&mut self.metadata
}
}
// ---------------------------------------------------------------------------
// GitHook CRD
// ---------------------------------------------------------------------------
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct GitHookSpec {
#[serde(default = "default_githook_image")]
pub image: String,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub env: Vec<EnvVar>,
#[serde(skip_serializing_if = "Option::is_none")]
pub resources: Option<ResourceRequirements>,
#[serde(default)]
pub image_pull_policy: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub worker_id: Option<String>,
}
fn default_githook_image() -> String {
"myapp/git-hook:latest".to_string()
}
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct GitHookStatus {
#[serde(skip_serializing_if = "Option::is_none")]
pub ready_replicas: Option<i32>,
#[serde(skip_serializing_if = "Option::is_none")]
pub phase: Option<String>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct GitHook {
pub api_version: String,
pub kind: String,
pub metadata: K8sObjectMeta,
pub spec: GitHookSpec,
#[serde(skip_serializing_if = "Option::is_none")]
pub status: Option<GitHookStatus>,
}
impl GitHook {
pub fn api_group() -> &'static str {
"code.dev"
}
pub fn version() -> &'static str {
"v1"
}
pub fn plural() -> &'static str {
"githooks"
}
}
impl Resource for GitHook {
type DynamicType = ();
type Scope = k8s_openapi::NamespaceResourceScope;
fn kind(_: &Self::DynamicType) -> Cow<'_, str> {
Cow::Borrowed("GitHook")
}
fn group(_: &Self::DynamicType) -> Cow<'_, str> {
Cow::Borrowed("code.dev")
}
fn version(_: &Self::DynamicType) -> Cow<'_, str> {
Cow::Borrowed("v1")
}
fn plural(_: &Self::DynamicType) -> Cow<'_, str> {
Cow::Borrowed("githooks")
}
fn meta(&self) -> &ObjectMeta {
&self.metadata
}
fn meta_mut(&mut self) -> &mut ObjectMeta {
&mut self.metadata
}
}
// ---------------------------------------------------------------------------
// Migrate CRD
// ---------------------------------------------------------------------------
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct MigrateSpec {
#[serde(default = "default_migrate_image")]
pub image: String,
#[serde(default, skip_serializing_if = "Vec::is_empty")]
pub env: Vec<EnvVar>,
#[serde(default = "default_migrate_cmd")]
pub command: String,
#[serde(default = "default_backoff_limit")]
pub backoff_limit: i32,
}
fn default_migrate_image() -> String {
"myapp/migrate:latest".to_string()
}
fn default_migrate_cmd() -> String {
"up".to_string()
}
fn default_backoff_limit() -> i32 {
3
}
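// Sketch verifying MigrateSpec's serde defaults: an empty object deserializes
// to the default image, "up" command, empty env, and backoff limit 3.
#[cfg(test)]
mod migrate_spec_default_tests {
    use super::*;

    #[test]
    fn empty_object_uses_defaults() {
        let s: MigrateSpec = serde_json::from_str("{}").unwrap();
        assert_eq!(s.image, "myapp/migrate:latest");
        assert_eq!(s.command, "up");
        assert_eq!(s.backoff_limit, 3);
        assert!(s.env.is_empty());
    }
}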
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct MigrateStatus {
#[serde(skip_serializing_if = "Option::is_none")]
pub phase: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub start_time: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub completion_time: Option<String>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Migrate {
pub api_version: String,
pub kind: String,
pub metadata: K8sObjectMeta,
pub spec: MigrateSpec,
#[serde(skip_serializing_if = "Option::is_none")]
pub status: Option<MigrateStatus>,
}
impl Migrate {
pub fn api_group() -> &'static str {
"code.dev"
}
pub fn version() -> &'static str {
"v1"
}
pub fn plural() -> &'static str {
"migrates"
}
}
impl Resource for Migrate {
type DynamicType = ();
type Scope = k8s_openapi::NamespaceResourceScope;
fn kind(_: &Self::DynamicType) -> Cow<'_, str> {
Cow::Borrowed("Migrate")
}
fn group(_: &Self::DynamicType) -> Cow<'_, str> {
Cow::Borrowed("code.dev")
}
fn version(_: &Self::DynamicType) -> Cow<'_, str> {
Cow::Borrowed("v1")
}
fn plural(_: &Self::DynamicType) -> Cow<'_, str> {
Cow::Borrowed("migrates")
}
fn meta(&self) -> &ObjectMeta {
&self.metadata
}
fn meta_mut(&mut self) -> &mut ObjectMeta {
&mut self.metadata
}
}
// ---------------------------------------------------------------------------
// Shared K8s types — aligned with k8s-openapi for Resource trait compatibility
// ---------------------------------------------------------------------------
/// Type alias so K8sObjectMeta satisfies Resource::meta() -> &k8s_openapi::...::ObjectMeta.
pub type K8sObjectMeta = ObjectMeta;
/// OwnerReference compatible with k8s-openapi.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct OwnerReference {
pub api_version: String,
pub kind: String,
pub name: String,
pub uid: String,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub controller: Option<bool>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub block_owner_deletion: Option<bool>,
}
impl From<OwnerReference> for K8sOwnerReference {
fn from(o: OwnerReference) -> Self {
K8sOwnerReference {
api_version: o.api_version,
kind: o.kind,
name: o.name,
uid: o.uid,
controller: o.controller,
block_owner_deletion: o.block_owner_deletion,
}
}
}
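// Sketch of the conversion above: a field-for-field mapping into the
// k8s-openapi OwnerReference that is attached to child resources.
#[cfg(test)]
mod owner_reference_tests {
    use super::*;

    #[test]
    fn converts_to_k8s_openapi() {
        let o = OwnerReference {
            api_version: "code.dev/v1".to_string(),
            kind: "App".to_string(),
            name: "demo".to_string(),
            uid: "uid-123".to_string(),
            controller: Some(true),
            block_owner_deletion: Some(true),
        };
        let k: K8sOwnerReference = o.into();
        assert_eq!(k.api_version, "code.dev/v1");
        assert_eq!(k.kind, "App");
        assert_eq!(k.controller, Some(true));
    }
}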

apps/operator/src/lib.rs Normal file

@ -0,0 +1,3 @@
pub mod context;
pub mod controller;
pub mod crd;

apps/operator/src/main.rs Normal file

@ -0,0 +1,100 @@
//! Code System Kubernetes Operator
//!
//! Manages the lifecycle of: App, GitServer, EmailWorker, GitHook, Migrate CRDs.
use operator::context::ReconcileCtx;
use std::sync::Arc;
use tracing::{Level, error, info};
use tracing_subscriber::FmtSubscriber;
#[tokio::main]
async fn main() -> anyhow::Result<()> {
// ---- Logging ----
let log_level = std::env::var("OPERATOR_LOG_LEVEL").unwrap_or_else(|_| "info".to_string());
let level = match log_level.to_lowercase().as_str() {
"trace" => Level::TRACE,
"debug" => Level::DEBUG,
"info" => Level::INFO,
"warn" => Level::WARN,
"error" => Level::ERROR,
_ => Level::INFO,
};
FmtSubscriber::builder()
.with_max_level(level)
.with_target(false)
.with_thread_ids(false)
.with_file(true)
.with_line_number(true)
.compact()
.init();
let ctx = Arc::new(ReconcileCtx::from_env().await?);
info!(
namespace = ctx.operator_namespace,
image_prefix = ctx.image_prefix,
"code-operator starting"
);
// ---- Spawn all 5 controllers ----
let app_handle = tokio::spawn({
let ctx = ctx.clone();
let client = ctx.client.clone();
async move {
use operator::controller;
if let Err(e) = controller::start_app(client, ctx).await {
error!(%e, "app controller stopped");
}
}
});
let gs_handle = tokio::spawn({
let ctx = ctx.clone();
let client = ctx.client.clone();
async move {
use operator::controller;
if let Err(e) = controller::start_gitserver(client, ctx).await {
error!(%e, "gitserver controller stopped");
}
}
});
let ew_handle = tokio::spawn({
let ctx = ctx.clone();
let client = ctx.client.clone();
async move {
use operator::controller;
if let Err(e) = controller::start_email_worker(client, ctx).await {
error!(%e, "email_worker controller stopped");
}
}
});
let gh_handle = tokio::spawn({
let ctx = ctx.clone();
let client = ctx.client.clone();
async move {
use operator::controller;
if let Err(e) = controller::start_git_hook(client, ctx).await {
error!(%e, "git_hook controller stopped");
}
}
});
let mig_handle = tokio::spawn({
let ctx = ctx.clone();
let client = ctx.client.clone();
async move {
use operator::controller;
if let Err(e) = controller::start_migrate(client, ctx).await {
error!(%e, "migrate controller stopped");
}
}
});
    // ---- Graceful shutdown on SIGINT / SIGTERM ----
    tokio::signal::ctrl_c().await.ok();
    info!("code-operator shutting down");
    // The controller streams run until their watches end, so abort the tasks
    // explicitly; otherwise join!() would block forever after Ctrl-C.
    for h in [&app_handle, &gs_handle, &ew_handle, &gh_handle, &mig_handle] {
        h.abort();
    }
    let _ = tokio::join!(app_handle, gs_handle, ew_handle, gh_handle, mig_handle);
    info!("code-operator stopped");
    Ok(())
}

components.json Normal file

@ -0,0 +1,25 @@
{
"$schema": "https://ui.shadcn.com/schema.json",
"style": "base-nova",
"rsc": false,
"tsx": true,
"tailwind": {
"config": "",
"css": "src/index.css",
"baseColor": "neutral",
"cssVariables": true,
"prefix": ""
},
"iconLibrary": "lucide",
"rtl": false,
"aliases": {
"components": "@/components",
"utils": "@/lib/utils",
"ui": "@/components/ui",
"lib": "@/lib",
"hooks": "@/hooks"
},
"menuColor": "default",
"menuAccent": "subtle",
"registries": {}
}

deploy/Chart.yaml Normal file

@ -0,0 +1,13 @@
apiVersion: v2
name: c-----code
description: Self-hosted GitHub + Slack alternative platform
type: application
version: 0.1.0
appVersion: "0.1.0"
keywords:
- git
- collaboration
- self-hosted
maintainers:
- name: C-----code Team
email: team@c.dev


@ -0,0 +1,35 @@
{{/* Helm NOTES.txt shown after install/upgrade */}}
{{- if .Release.IsInstall }}
🎉 {{ .Chart.Name }} {{ .Chart.Version }} installed in namespace {{ .Release.Namespace }}.
⚠️ Prerequisites you must fulfil before the app starts:
1. PostgreSQL database is reachable.
2. Redis is reachable.
3. (Optional) NATS if HOOK_POOL is enabled.
4. (Optional) Qdrant if AI embeddings are used.
📋 Required Secret "{{ .Release.Name }}-secrets" (create manually or via external secrets):
apiVersion: v1
kind: Secret
metadata:
name: {{ .Release.Name }}-secrets
namespace: {{ .Release.Namespace }}
type: Opaque
stringData:
APP_DATABASE_URL: postgresql://user:password@postgres:5432/db
APP_REDIS_URL: redis://redis:6379
# APP_SMTP_PASSWORD: ...
# APP_QDRANT_API_KEY: ...
Or set .Values.secrets in values.yaml.
🔄 To run database migrations:
helm upgrade {{ .Release.Name }} ./c-----code -n {{ .Release.Namespace }} \
--set migrate.enabled=true
📖 Useful commands:
kubectl get pods -n {{ .Release.Namespace }}
kubectl logs -n {{ .Release.Namespace }} -l app.kubernetes.io/name={{ .Chart.Name }}
{{- end }}


@ -0,0 +1,44 @@
{{/* =============================================================================
Common helpers
============================================================================= */}}
{{- define "c-----code.fullname" -}}
{{- .Release.Name -}}
{{- end -}}
{{- define "c-----code.namespace" -}}
{{- .Values.namespace | default .Release.Namespace -}}
{{- end -}}
{{- define "c-----code.image" -}}
{{- $registry := .Values.image.registry -}}
{{- $pullPolicy := .Values.image.pullPolicy -}}
{{- printf "%s/%s:%s" $registry .image.repository .image.tag -}}
{{- end -}}
{{/* Build the full image reference for a sub-chart image dict (global registry + repository:tag, with global defaults merged in) */}}
{{- define "c-----code.mergeImage" -}}
{{- $merged := dict "pullPolicy" $.Values.image.pullPolicy -}}
{{- $merged = merge $merged .image -}}
{{- printf "%s/%s:%s" $.Values.image.registry $merged.repository $merged.tag -}}
{{- end -}}
{{/* Build a key-value env var list, optionally reading from a Secret */}}
{{- define "c-----code.envFromSecret" -}}
{{- $secretName := .existingSecret -}}
{{- $keys := .secretKeys -}}
{{- $result := list -}}
{{- range $envName, $secretKey := $keys -}}
{{- $item := dict "name" $envName "valueFrom" (dict "secretKeyRef" (dict "name" $secretName "key" $secretKey)) -}}
{{- $result = append $result $item -}}
{{- end -}}
{{- $result | toYaml -}}
{{- end -}}
{{/* Merge two env lists (extra env appended after auto-injected) */}}
{{- define "c-----code.mergeEnv" -}}
{{- $auto := .auto -}}
{{- $extra := .extra | default list -}}
{{- $merged := concat $auto $extra -}}
{{- $merged | toYaml -}}
{{- end -}}


@ -0,0 +1,111 @@
{{- if .Values.app.enabled -}}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "c-----code.fullname" . }}-app
namespace: {{ include "c-----code.namespace" . }}
labels:
app.kubernetes.io/name: {{ include "c-----code.fullname" . }}-app
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
spec:
replicas: {{ .Values.app.replicaCount }}
selector:
matchLabels:
app.kubernetes.io/name: {{ include "c-----code.fullname" . }}-app
app.kubernetes.io/instance: {{ .Release.Name }}
template:
metadata:
labels:
app.kubernetes.io/name: {{ include "c-----code.fullname" . }}-app
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
containers:
- name: app
image: "{{ .Values.image.registry }}/{{ .Values.app.image.repository }}:{{ .Values.app.image.tag }}"
imagePullPolicy: {{ .Values.app.image.pullPolicy | default .Values.image.pullPolicy }}
ports:
- name: http
containerPort: {{ .Values.app.service.port }}
protocol: TCP
env:
- name: APP_DATABASE_URL
valueFrom:
secretKeyRef:
name: {{ .Values.database.existingSecret | default (printf "%s-secrets" (include "c-----code.fullname" .)) }}
key: {{ .Values.database.secretKeys.url }}
optional: true
- name: APP_REDIS_URL
valueFrom:
secretKeyRef:
name: {{ .Values.redis.existingSecret | default (printf "%s-secrets" (include "c-----code.fullname" .)) }}
key: {{ .Values.redis.secretKeys.url }}
optional: true
{{- if .Values.nats.enabled }}
- name: HOOK_POOL_REDIS_LIST_PREFIX
value: "{hook}"
- name: HOOK_POOL_REDIS_LOG_CHANNEL
value: "hook:logs"
{{- end }}
{{- if .Values.qdrant.enabled }}
- name: APP_QDRANT_URL
value: {{ .Values.qdrant.url }}
{{- if and .Values.qdrant.existingSecret .Values.qdrant.secretKeys.apiKey }}
- name: APP_QDRANT_API_KEY
valueFrom:
secretKeyRef:
name: {{ .Values.qdrant.existingSecret }}
key: {{ .Values.qdrant.secretKeys.apiKey }}
optional: true
{{- end }}
{{- end }}
{{- range .Values.app.env }}
- name: {{ .name }}
value: {{ .value | quote }}
{{- end }}
livenessProbe:
httpGet:
path: {{ .Values.app.livenessProbe.path }}
port: {{ .Values.app.livenessProbe.port }}
initialDelaySeconds: {{ .Values.app.livenessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.app.livenessProbe.periodSeconds }}
readinessProbe:
httpGet:
path: {{ .Values.app.readinessProbe.path }}
port: {{ .Values.app.readinessProbe.port }}
initialDelaySeconds: {{ .Values.app.readinessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.app.readinessProbe.periodSeconds }}
resources:
{{- toYaml .Values.app.resources | nindent 10 }}
{{- with .Values.app.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.app.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.app.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
---
apiVersion: v1
kind: Service
metadata:
name: {{ include "c-----code.fullname" . }}-app
namespace: {{ include "c-----code.namespace" . }}
labels:
app.kubernetes.io/name: {{ include "c-----code.fullname" . }}-app
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
type: {{ .Values.app.service.type }}
ports:
- port: {{ .Values.app.service.port }}
targetPort: http
protocol: TCP
name: http
selector:
app.kubernetes.io/name: {{ include "c-----code.fullname" . }}-app
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}


@ -0,0 +1,15 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "c-----code.fullname" . }}-config
namespace: {{ include "c-----code.namespace" . }}
labels:
app.kubernetes.io/name: {{ .Chart.Name }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
data:
{{- if .Values.app.config }}
{{- range $key, $value := .Values.app.config }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}


@ -0,0 +1,58 @@
{{- if .Values.emailWorker.enabled -}}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "c-----code.fullname" . }}-email-worker
namespace: {{ include "c-----code.namespace" . }}
labels:
app.kubernetes.io/name: {{ include "c-----code.fullname" . }}-email-worker
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: {{ include "c-----code.fullname" . }}-email-worker
app.kubernetes.io/instance: {{ .Release.Name }}
template:
metadata:
labels:
app.kubernetes.io/name: {{ include "c-----code.fullname" . }}-email-worker
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
containers:
- name: email-worker
image: "{{ .Values.image.registry }}/{{ .Values.emailWorker.image.repository }}:{{ .Values.emailWorker.image.tag }}"
imagePullPolicy: {{ .Values.emailWorker.image.pullPolicy | default .Values.image.pullPolicy }}
env:
- name: APP_DATABASE_URL
valueFrom:
secretKeyRef:
name: {{ .Values.database.existingSecret | default (printf "%s-secrets" (include "c-----code.fullname" .)) }}
key: {{ .Values.database.secretKeys.url }}
optional: true
- name: APP_REDIS_URL
valueFrom:
secretKeyRef:
name: {{ .Values.redis.existingSecret | default (printf "%s-secrets" (include "c-----code.fullname" .)) }}
key: {{ .Values.redis.secretKeys.url }}
optional: true
{{- range .Values.emailWorker.env }}
- name: {{ .name }}
value: {{ .value | quote }}
{{- end }}
resources:
{{- toYaml .Values.emailWorker.resources | nindent 10 }}
{{- with .Values.emailWorker.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.emailWorker.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.emailWorker.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}


@ -0,0 +1,64 @@
{{- if .Values.gitHook.enabled -}}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "c-----code.fullname" . }}-git-hook
namespace: {{ include "c-----code.namespace" . }}
labels:
app.kubernetes.io/name: {{ include "c-----code.fullname" . }}-git-hook
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
spec:
replicas: {{ .Values.gitHook.replicaCount | default 2 }}
selector:
matchLabels:
app.kubernetes.io/name: {{ include "c-----code.fullname" . }}-git-hook
app.kubernetes.io/instance: {{ .Release.Name }}
template:
metadata:
labels:
app.kubernetes.io/name: {{ include "c-----code.fullname" . }}-git-hook
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
containers:
- name: git-hook
image: "{{ .Values.image.registry }}/{{ .Values.gitHook.image.repository }}:{{ .Values.gitHook.image.tag }}"
imagePullPolicy: {{ .Values.gitHook.image.pullPolicy | default .Values.image.pullPolicy }}
env:
- name: APP_DATABASE_URL
valueFrom:
secretKeyRef:
name: {{ .Values.database.existingSecret | default (printf "%s-secrets" (include "c-----code.fullname" .)) }}
key: {{ .Values.database.secretKeys.url }}
optional: true
- name: APP_REDIS_URL
valueFrom:
secretKeyRef:
name: {{ .Values.redis.existingSecret | default (printf "%s-secrets" (include "c-----code.fullname" .)) }}
key: {{ .Values.redis.secretKeys.url }}
optional: true
{{- if .Values.nats.enabled }}
- name: HOOK_POOL_REDIS_LIST_PREFIX
value: "{hook}"
- name: HOOK_POOL_REDIS_LOG_CHANNEL
value: "hook:logs"
{{- end }}
{{- range .Values.gitHook.env }}
- name: {{ .name }}
value: {{ .value | quote }}
{{- end }}
resources:
{{- toYaml .Values.gitHook.resources | nindent 10 }}
{{- with .Values.gitHook.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.gitHook.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.gitHook.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}


@ -0,0 +1,162 @@
{{- if .Values.gitserver.enabled -}}
{{- $fullName := include "c-----code.fullname" . -}}
{{- $ns := include "c-----code.namespace" . -}}
{{- $svc := .Values.gitserver -}}
{{/* PersistentVolumeClaim for git repositories */}}
{{- if $svc.persistence.enabled }}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ $fullName }}-repos
namespace: {{ $ns }}
labels:
app.kubernetes.io/name: {{ $fullName }}-gitserver
app.kubernetes.io/instance: {{ $.Release.Name }}
spec:
accessModes:
- {{ $svc.persistence.accessMode | default "ReadWriteOnce" }}
resources:
requests:
storage: {{ $svc.persistence.size }}
{{- if $svc.persistence.storageClass }}
storageClassName: {{ $svc.persistence.storageClass }}
{{- end }}
{{- end }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ $fullName }}-gitserver
namespace: {{ $ns }}
labels:
app.kubernetes.io/name: {{ $fullName }}-gitserver
app.kubernetes.io/instance: {{ $.Release.Name }}
app.kubernetes.io/version: {{ $.Chart.AppVersion }}
spec:
replicas: {{ $svc.replicaCount }}
selector:
matchLabels:
app.kubernetes.io/name: {{ $fullName }}-gitserver
app.kubernetes.io/instance: {{ $.Release.Name }}
template:
metadata:
labels:
app.kubernetes.io/name: {{ $fullName }}-gitserver
app.kubernetes.io/instance: {{ $.Release.Name }}
spec:
containers:
- name: gitserver
image: "{{ $.Values.image.registry }}/{{ $svc.image.repository }}:{{ $svc.image.tag }}"
imagePullPolicy: {{ $svc.image.pullPolicy | default $.Values.image.pullPolicy }}
ports:
- name: http
containerPort: {{ $svc.service.http.port }}
protocol: TCP
- name: ssh
containerPort: {{ $svc.ssh.port }}
protocol: TCP
env:
- name: APP_REPOS_ROOT
value: /data/repos
- name: APP_DATABASE_URL
valueFrom:
secretKeyRef:
name: {{ $.Values.database.existingSecret | default (printf "%s-secrets" $fullName) }}
key: {{ $.Values.database.secretKeys.url }}
optional: true
- name: APP_REDIS_URL
valueFrom:
secretKeyRef:
name: {{ $.Values.redis.existingSecret | default (printf "%s-secrets" $fullName) }}
key: {{ $.Values.redis.secretKeys.url }}
optional: true
{{- if $svc.ssh.domain }}
- name: APP_SSH_DOMAIN
value: {{ $svc.ssh.domain }}
{{- end }}
{{- if $svc.ssh.port }}
- name: APP_SSH_PORT
value: {{ $svc.ssh.port | quote }}
{{- end }}
{{- range $svc.env }}
- name: {{ .name }}
value: {{ .value | quote }}
{{- end }}
resources:
{{- toYaml $svc.resources | nindent 10 }}
volumeMounts:
{{- if $svc.persistence.enabled }}
- name: repos
mountPath: /data/repos
{{- end }}
volumes:
{{- if $svc.persistence.enabled }}
- name: repos
persistentVolumeClaim:
claimName: {{ $fullName }}-repos
{{- end }}
{{- with $svc.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with $svc.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with $svc.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
---
# HTTP service (git smart HTTP)
apiVersion: v1
kind: Service
metadata:
name: {{ $fullName }}-gitserver-http
namespace: {{ $ns }}
labels:
app.kubernetes.io/name: {{ $fullName }}-gitserver
app.kubernetes.io/instance: {{ $.Release.Name }}
spec:
type: {{ $svc.service.http.type }}
ports:
- name: http
port: {{ $svc.service.http.port }}
targetPort: http
protocol: TCP
selector:
app.kubernetes.io/name: {{ $fullName }}-gitserver
app.kubernetes.io/instance: {{ $.Release.Name }}
---
# SSH service (git over SSH)
apiVersion: v1
kind: Service
metadata:
name: {{ $fullName }}-gitserver-ssh
namespace: {{ $ns }}
labels:
app.kubernetes.io/name: {{ $fullName }}-gitserver
app.kubernetes.io/instance: {{ $.Release.Name }}
spec:
type: {{ $svc.service.ssh.type }}
{{- if eq $svc.service.ssh.type "NodePort" }}
ports:
- name: ssh
port: {{ $svc.ssh.port }}
targetPort: ssh
nodePort: {{ $svc.service.ssh.nodePort }}
{{- else }}
ports:
- name: ssh
port: {{ $svc.ssh.port }}
targetPort: ssh
{{- end }}
selector:
app.kubernetes.io/name: {{ $fullName }}-gitserver
app.kubernetes.io/instance: {{ $.Release.Name }}
{{- end }}


@ -0,0 +1,46 @@
{{- if .Values.app.ingress.enabled -}}
{{- $svcName := printf "%s-app" (include "c-----code.fullname" .) -}}
{{- $ns := include "c-----code.namespace" . -}}
{{- $ing := .Values.app.ingress -}}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ include "c-----code.fullname" . }}-ingress
namespace: {{ $ns }}
labels:
app.kubernetes.io/name: {{ include "c-----code.fullname" . }}-app
app.kubernetes.io/instance: {{ .Release.Name }}
{{- with $ing.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if $ing.className }}
ingressClassName: {{ $ing.className }}
{{- end }}
{{- if $ing.tls }}
tls:
{{- range $ing.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range $ing.hosts }}
- host: {{ .host | quote }}
http:
paths:
{{- range .paths }}
- path: {{ .path }}
pathType: {{ .pathType | default "Prefix" }}
backend:
service:
name: {{ $svcName }}
port:
number: {{ $.Values.app.service.port }}
{{- end }}
{{- end }}
{{- end }}


@ -0,0 +1,42 @@
{{- if .Values.migrate.enabled -}}
apiVersion: batch/v1
kind: Job
metadata:
name: {{ include "c-----code.fullname" . }}-migrate
namespace: {{ include "c-----code.namespace" . }}
labels:
app.kubernetes.io/name: {{ include "c-----code.fullname" . }}-migrate
app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
  annotations:
    helm.sh/hook: post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation
spec:
backoffLimit: {{ .Values.migrate.backoffLimit }}
template:
metadata:
labels:
app.kubernetes.io/name: {{ include "c-----code.fullname" . }}-migrate
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
restartPolicy: OnFailure
containers:
- name: migrate
image: "{{ .Values.image.registry }}/{{ .Values.migrate.image.repository }}:{{ .Values.migrate.image.tag }}"
imagePullPolicy: {{ .Values.migrate.image.pullPolicy | default .Values.image.pullPolicy }}
        args:
        {{- if .Values.migrate.command }}
        - {{ .Values.migrate.command }}
        {{- else }}
        - up
        {{- end }}
env:
- name: APP_DATABASE_URL
valueFrom:
secretKeyRef:
name: {{ .Values.database.existingSecret | default (printf "%s-secrets" (include "c-----code.fullname" .)) }}
key: {{ .Values.database.secretKeys.url }}
{{- range .Values.migrate.env }}
- name: {{ .name }}
value: {{ .value | quote }}
{{- end }}
{{- end }}


@ -0,0 +1,52 @@
{{- if .Values.operator.enabled -}}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "c-----code.fullname" . }}-operator
namespace: {{ include "c-----code.namespace" . }}
labels:
app.kubernetes.io/name: {{ include "c-----code.fullname" . }}-operator
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: {{ include "c-----code.fullname" . }}-operator
app.kubernetes.io/instance: {{ .Release.Name }}
template:
metadata:
labels:
app.kubernetes.io/name: {{ include "c-----code.fullname" . }}-operator
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
serviceAccountName: {{ include "c-----code.fullname" . }}-operator
containers:
- name: operator
image: "{{ .Values.image.registry }}/{{ .Values.operator.image.repository }}:{{ .Values.operator.image.tag }}"
imagePullPolicy: {{ .Values.operator.image.pullPolicy | default .Values.image.pullPolicy }}
resources:
{{- toYaml .Values.operator.resources | nindent 10 }}
{{- with .Values.operator.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.operator.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.operator.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "c-----code.fullname" . }}-operator
namespace: {{ include "c-----code.namespace" . }}
labels:
app.kubernetes.io/name: {{ include "c-----code.fullname" . }}-operator
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}


@ -0,0 +1,17 @@
{{- /* Template for bootstrap secrets; replace with an external secret manager in production */ -}}
{{- if .Values.secrets }}
apiVersion: v1
kind: Secret
metadata:
name: {{ include "c-----code.fullname" . }}-secrets
namespace: {{ include "c-----code.namespace" . }}
labels:
app.kubernetes.io/name: {{ .Chart.Name }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
type: Opaque
stringData:
{{- range $key, $value := .Values.secrets }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}

262
deploy/values.yaml Normal file

@ -0,0 +1,262 @@
# =============================================================================
# Global / common settings
# =============================================================================
namespace: c-----code
releaseName: c-----code
image:
registry: harbor.gitdata.me/gta_team
pullPolicy: IfNotPresent
# PostgreSQL (required): set the connection string via a secret or values
database:
existingSecret: ""
secretKeys:
url: APP_DATABASE_URL
# Redis (required)
redis:
existingSecret: ""
secretKeys:
url: APP_REDIS_URL
# NATS (optional; required only if the hook pool is enabled)
nats:
enabled: false
url: nats://nats:4222
# Qdrant (optional; required only if AI embeddings are used)
qdrant:
enabled: false
url: http://qdrant:6333
existingSecret: ""
secretKeys:
apiKey: APP_QDRANT_API_KEY
# =============================================================================
# App: main web/API service
# =============================================================================
app:
enabled: true
replicaCount: 3
image:
repository: app
tag: latest
service:
type: ClusterIP
port: 8080
ingress:
enabled: false
className: cilium # Cilium Ingress (or envoy for EnvoyGateway)
annotations: {}
hosts:
- host: c-----.local
paths:
- path: /
pathType: Prefix
tls: []
resources:
requests:
cpu: 100m
memory: 256Mi
limits:
cpu: 1000m
memory: 1Gi
livenessProbe:
path: /health
port: 8080
initialDelaySeconds: 5
periodSeconds: 10
readinessProbe:
path: /health
port: 8080
initialDelaySeconds: 5
periodSeconds: 5
  # Extra env vars (merged with the auto-injected ones)
env: []
nodeSelector: {}
tolerations: []
affinity: {}
# =============================================================================
# Gitserver: git daemon / SSH + HTTP server
# =============================================================================
gitserver:
enabled: true
replicaCount: 1
image:
repository: gitserver
tag: latest
service:
http:
type: ClusterIP
port: 8022
ssh:
type: NodePort
nodePort: 30222
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 500m
memory: 512Mi
# Storage for git repos
persistence:
enabled: true
storageClass: ""
size: 50Gi
accessMode: ReadWriteOnce
ssh:
domain: ""
port: 22
env: []
nodeSelector: {}
tolerations: []
affinity: {}
# =============================================================================
# Email worker: processes the outgoing email queue
# =============================================================================
emailWorker:
enabled: true
image:
repository: email-worker
tag: latest
resources:
requests:
cpu: 50m
memory: 64Mi
limits:
cpu: 200m
memory: 256Mi
env: []
nodeSelector: {}
tolerations: []
affinity: {}
# =============================================================================
# Git hook pool: handles pre-receive / post-receive hooks
# =============================================================================
gitHook:
enabled: true
image:
repository: git-hook
tag: latest
replicaCount: 2
resources:
requests:
cpu: 50m
memory: 64Mi
limits:
cpu: 200m
memory: 256Mi
env: []
nodeSelector: {}
tolerations: []
affinity: {}
# =============================================================================
# Migrate: database migration Job (run once)
# =============================================================================
migrate:
enabled: false # Set true to run migrations on upgrade
image:
repository: migrate
tag: latest
command: up
backoffLimit: 3
env: []
# =============================================================================
# Operator: Kubernetes operator (manages custom App/GitServer CRDs)
# =============================================================================
operator:
enabled: false # Enable only if running the custom operator
image:
repository: operator
tag: latest
resources:
requests:
cpu: 50m
memory: 64Mi
limits:
cpu: 200m
memory: 256Mi
nodeSelector: {}
tolerations: []
affinity: {}
# =============================================================================
# Act Runner: Gitea Actions self-hosted runner
# =============================================================================
actRunner:
enabled: false
image:
repository: act-runner
tag: latest
replicaCount: 2
# Concurrency per runner instance
capacity: 2
# Runner labels (must match workflow `runs-on`)
labels:
- gitea
- docker
logLevel: info
cache:
enabled: true
dir: /tmp/actions-cache
resources:
requests:
cpu: 500m
memory: 1Gi
limits:
cpu: 2000m
memory: 4Gi
env: []
nodeSelector: {}
tolerations:
- key: "runner"
operator: "Equal"
value: "true"
effect: "NoSchedule"
affinity: {}

41
docker/app.Dockerfile Normal file

@ -0,0 +1,41 @@
# ---- Stage 1: Build ----
FROM rust:1.94-bookworm AS builder
ARG BUILD_TARGET=x86_64-unknown-linux-gnu
ENV TARGET=${BUILD_TARGET}
# Build dependencies: OpenSSL headers and libclang (needed for codegen/bindgen)
RUN apt-get update && apt-get install -y --no-install-recommends \
pkg-config libssl-dev libclang-dev \
gcc g++ make \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /build
# Copy workspace manifests
COPY Cargo.toml Cargo.lock ./
COPY libs/ libs/
COPY apps/app/ apps/app/
# Fetch dependencies up front
RUN cargo fetch
# Build the binary, then copy it out of the cache-mounted target dir so the runtime stage can COPY it
RUN --mount=type=cache,target=/usr/local/cargo/registry \
    --mount=type=cache,target=/usr/local/cargo/git \
    --mount=type=cache,target=target \
    cargo build --release --package app --target ${TARGET} && \
    cp "target/${TARGET}/release/app" /build/app
# ---- Stage 2: Runtime ----
FROM debian:bookworm-slim AS runtime
RUN apt-get update && apt-get install -y --no-install-recommends \
    ca-certificates libssl3 \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY --from=builder /build/app /app/app
# All config via environment variables (APP_* prefix)
ENV APP_LOG_LEVEL=info
ENTRYPOINT ["/app/app"]

171
docker/build.md Normal file

@ -0,0 +1,171 @@
# Docker Build Guide
## Prerequisites
- Docker 20.10+
- Cargo.lock exists (run `cargo generate-lockfile`)
- Network access to crates.io
## Quick Start
```bash
# Build all images (defaults: registry=myapp, tag=latest)
./docker/build.sh
# Build specific images
./docker/build.sh app
./docker/build.sh gitserver email-worker
# Override registry and tag
REGISTRY=myregistry TAG=v1.0.0 ./docker/build.sh
```
## Images
| Image | Dockerfile | Binary | Instance type | Description |
|---|---|---|---|---|
| `myapp/app:latest` | `app.Dockerfile` | `app` | Multi-instance | Main web service (API + WS) |
| `myapp/gitserver:latest` | `gitserver.Dockerfile` | `gitserver` | Single instance | Git HTTP + SSH server |
| `myapp/email-worker:latest` | `email-worker.Dockerfile` | `email-worker` | Single instance | Outgoing email worker |
| `myapp/git-hook:latest` | `git-hook.Dockerfile` | `git-hook` | Single instance | Git hook event processing |
| `myapp/migrate:latest` | `migrate.Dockerfile` | `migrate` | Job/InitContainer | Database migration CLI |
## Deployment Architecture
```
                    ┌─ NATS ─┐
                    │        │
┌─────────┐    ┌──────────────┐    ┌─────────────────┐
│  LB/    │───▶│  app (×N)    │    │  git-hook       │
│  nginx  │    │  (stateless) │    │  (single)       │
└─────────┘    └──────────────┘    └─────────────────┘
               ┌──────────────┐
               │  gitserver   │
               │  (single)    │    ┌─────────────────┐
               │  HTTP :8022  │───▶│  email-worker   │
               │  SSH  :2222  │    │  (single)       │
               └──────────────┘    └─────────────────┘
```
## Environment Variables
All configuration is injected via environment variables; no image changes are needed:
| Variable | Example | Description |
|---|---|---|
| `APP_DATABASE_URL` | `postgres://user:pass@host:5432/db` | Database connection |
| `APP_REDIS_URLS` | `redis://host:6379` | Redis (comma-separated for multiple instances) |
| `APP_SMTP_HOST` | `smtp.example.com` | SMTP server |
| `APP_SMTP_USERNAME` | `noreply@example.com` | SMTP username |
| `APP_SMTP_PASSWORD` | `xxx` | SMTP password |
| `APP_SMTP_FROM` | `noreply@example.com` | Sender address |
| `APP_AI_BASIC_URL` | `https://api.openai.com/v1` | AI API base URL |
| `APP_AI_API_KEY` | `sk-xxx` | AI API key |
| `APP_DOMAIN_URL` | `https://example.com` | Primary domain |
| `APP_LOG_LEVEL` | `info` | Log level: trace/debug/info/warn/error |
| `APP_SSH_DOMAIN` | `git.example.com` | Git SSH domain |
| `APP_REPOS_ROOT` | `/data/repos` | Git repository storage path |
| `NATS_URL` | `nats://localhost:4222` | NATS server address |
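Since every binary is configured purely through these variables, it can be useful to fail fast when a required one is missing. As a sketch (the `require` helper and the sample values below are illustrative, not part of any image), a wrapper entrypoint could check them before exec'ing the service:

```shell
# Fail fast when required APP_* variables are unset or empty (illustrative helper).
require() {
  for name in "$@"; do
    eval "value=\${$name:-}"   # POSIX-compatible indirect lookup
    if [ -z "$value" ]; then
      echo "missing required env var: $name" >&2
      return 1
    fi
  done
}

# Sample values matching the table above
APP_DATABASE_URL="postgres://user:pass@host:5432/db"
APP_REDIS_URLS="redis://host:6379"

require APP_DATABASE_URL APP_REDIS_URLS && echo "config ok"
```

In a real entrypoint the final line would be `require … && exec /app/app`.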
## Database Migrations
Run migrations before starting the services:
```bash
# Option 1: run directly
docker run --rm \
  --env-file .env \
  myapp/migrate:latest up
# Option 2: Kubernetes InitContainer
# See the K8s example below
```
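When the database comes up in parallel with the migration (fresh environments, compose stacks), the first attempt may fail on connection. A small retry wrapper helps; this is a sketch (the `retry` helper is not shipped anywhere), and the real `docker run … migrate:latest up` invocation would take the place of `true`:

```shell
# Retry a command up to N times with a 1-second pause between attempts (illustrative helper).
retry() {
  max="$1"; shift
  attempt=1
  until "$@"; do
    if [ "$attempt" -ge "$max" ]; then
      echo "giving up after $attempt attempts" >&2
      return 1
    fi
    attempt=$((attempt + 1))
    sleep 1
  done
}

retry 3 true && echo "migration step succeeded"
```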
## Kubernetes Deployment Example
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: app
spec:
replicas: 3
template:
spec:
containers:
- name: app
image: myapp/app:latest
envFrom:
- secretRef:
name: app-secrets
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: gitserver
spec:
replicas: 1
template:
spec:
containers:
- name: gitserver
image: myapp/gitserver:latest
ports:
- containerPort: 8022 # HTTP
- containerPort: 2222 # SSH
envFrom:
- secretRef:
name: app-secrets
volumeMounts:
- name: repos
mountPath: /data/repos
volumes:
- name: repos
persistentVolumeClaim:
claimName: git-repos
---
apiVersion: batch/v1
kind: Job
metadata:
name: migrate
spec:
template:
spec:
containers:
- name: migrate
image: myapp/migrate:latest
envFrom:
- secretRef:
name: app-secrets
args: ["up"]
restartPolicy: Never
```
## Build Cache
The images use Docker BuildKit cache mounts:
- `--mount=type=cache,target=/usr/local/cargo/registry` for crates.io dependencies
- `--mount=type=cache,target=/usr/local/cargo/git` for git dependencies
- `--mount=type=cache,target=target` for compiled artifacts
To speed up incremental builds further, mount a persistent cache volume:
```bash
docker buildx create --use
docker buildx build \
--cache-from=type=local,src=/tmp/cargo-cache \
--cache-to=type=local,dest=/tmp/cargo-cache \
-f docker/app.Dockerfile -t myapp/app .
```
## Cross-Platform Builds
By default the build produces x86_64 Linux executables. To target another platform:
```bash
# ARM64
BUILD_TARGET=aarch64-unknown-linux-gnu ./docker/build.sh
# Install the matching target first
rustup target add aarch64-unknown-linux-gnu
```
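Building several architectures in one pass can be scripted around `build.sh`. The tag-derivation helper below is hypothetical (not part of the repo), and the loop only prints the commands it would run:

```shell
# Derive a short per-arch suffix from a Rust target triple (hypothetical helper).
arch_tag() {
  printf '%s' "${1%%-*}"   # e.g. aarch64-unknown-linux-gnu -> aarch64
}

for target in x86_64-unknown-linux-gnu aarch64-unknown-linux-gnu; do
  echo "BUILD_TARGET=$target TAG=v1.0.0-$(arch_tag "$target") ./docker/build.sh app"
done
```

Dropping the `echo` would actually run the builds, assuming both rustup targets are installed.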

52
docker/build.sh Normal file

@ -0,0 +1,52 @@
#!/bin/bash
set -e
REGISTRY="${REGISTRY:-harbor.gitdata.me/gta_team}"
TAG="${TAG:-latest}"
BUILD_TARGET="${BUILD_TARGET:-x86_64-unknown-linux-gnu}"
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
cd "$SCRIPT_DIR/.."
# All images: (dockerfile, image-name)
declare -A ALL_IMAGES=(
[app]="docker/app.Dockerfile"
[gitserver]="docker/gitserver.Dockerfile"
[email-worker]="docker/email-worker.Dockerfile"
[git-hook]="docker/git-hook.Dockerfile"
[migrate]="docker/migrate.Dockerfile"
[operator]="docker/operator.Dockerfile"
)
# Build only the images named as arguments (default: all)
TARGETS=("$@")
if [[ ${#TARGETS[@]} -eq 0 ]] || [[ "${TARGETS[0]}" == "all" ]]; then
TARGETS=("${!ALL_IMAGES[@]}")
fi
for name in "${TARGETS[@]}"; do
df="${ALL_IMAGES[$name]}"
if [[ -z "$df" ]]; then
echo "ERROR: unknown image '$name'"
    echo "Available: ${!ALL_IMAGES[*]}"
exit 1
fi
if [[ ! -f "$df" ]]; then
echo "ERROR: $df not found"
exit 1
fi
image="${REGISTRY}/${name}:${TAG}"
echo "==> Building $image"
docker build \
--build-arg BUILD_TARGET="${BUILD_TARGET}" \
-f "$df" \
-t "$image" \
.
echo "==> $image done"
echo ""
done
echo "==> All images built:"
for name in "${TARGETS[@]}"; do
echo " ${REGISTRY}/${name}:${TAG}"
done

127
docker/crd/app-crd.yaml Normal file

@ -0,0 +1,127 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: apps.code.dev
annotations:
controller-gen.kubebuilder.io/version: v0.16.0
spec:
group: code.dev
names:
kind: App
listKind: AppList
plural: apps
singular: app
shortNames:
- app
scope: Namespaced
versions:
- name: v1
served: true
storage: true
subresources:
status: {}
additionalPrinterColumns:
- name: Replicas
jsonPath: .spec.replicas
type: integer
- name: Ready
jsonPath: .status.phase
type: string
- name: Age
jsonPath: .metadata.creationTimestamp
type: date
schema:
openAPIV3Schema:
type: object
required: [spec]
properties:
apiVersion:
type: string
kind:
type: string
metadata:
type: object
spec:
type: object
properties:
image:
type: string
default: myapp/app:latest
replicas:
type: integer
default: 3
env:
type: array
items:
type: object
required: [name]
properties:
name:
type: string
value:
type: string
valueFrom:
type: object
properties:
secretRef:
type: object
required: [name, secretName, secretKey]
properties:
name:
type: string
secretName:
type: string
secretKey:
type: string
resources:
type: object
properties:
requests:
type: object
properties:
cpu:
type: string
memory:
type: string
limits:
type: object
properties:
cpu:
type: string
memory:
type: string
livenessProbe:
type: object
properties:
port:
type: integer
default: 8080
path:
type: string
default: /health
initialDelaySeconds:
type: integer
default: 5
readinessProbe:
type: object
properties:
port:
type: integer
default: 8080
path:
type: string
default: /health
initialDelaySeconds:
type: integer
default: 5
imagePullPolicy:
type: string
default: IfNotPresent
status:
type: object
properties:
readyReplicas:
type: integer
phase:
type: string


@ -0,0 +1,94 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: emailworkers.code.dev
annotations:
controller-gen.kubebuilder.io/version: v0.16.0
spec:
group: code.dev
names:
kind: EmailWorker
listKind: EmailWorkerList
plural: emailworkers
singular: emailworker
shortNames:
- ew
scope: Namespaced
versions:
- name: v1
served: true
storage: true
subresources:
status: {}
additionalPrinterColumns:
- name: Age
jsonPath: .metadata.creationTimestamp
type: date
schema:
openAPIV3Schema:
type: object
required: [spec]
properties:
apiVersion:
type: string
kind:
type: string
metadata:
type: object
spec:
type: object
properties:
image:
type: string
default: myapp/email-worker:latest
env:
type: array
items:
type: object
required: [name]
properties:
name:
type: string
value:
type: string
valueFrom:
type: object
properties:
secretRef:
type: object
required: [name, secretName, secretKey]
properties:
name:
type: string
secretName:
type: string
secretKey:
type: string
resources:
type: object
properties:
requests:
type: object
properties:
cpu:
type: string
memory:
type: string
limits:
type: object
properties:
cpu:
type: string
memory:
type: string
imagePullPolicy:
type: string
default: IfNotPresent
status:
type: object
properties:
readyReplicas:
type: integer
phase:
type: string


@ -0,0 +1,96 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: githooks.code.dev
annotations:
controller-gen.kubebuilder.io/version: v0.16.0
spec:
group: code.dev
names:
kind: GitHook
listKind: GitHookList
plural: githooks
singular: githook
shortNames:
- ghk
scope: Namespaced
versions:
- name: v1
served: true
storage: true
subresources:
status: {}
additionalPrinterColumns:
- name: Age
jsonPath: .metadata.creationTimestamp
type: date
schema:
openAPIV3Schema:
type: object
required: [spec]
properties:
apiVersion:
type: string
kind:
type: string
metadata:
type: object
spec:
type: object
properties:
image:
type: string
default: myapp/git-hook:latest
env:
type: array
items:
type: object
required: [name]
properties:
name:
type: string
value:
type: string
valueFrom:
type: object
properties:
secretRef:
type: object
required: [name, secretName, secretKey]
properties:
name:
type: string
secretName:
type: string
secretKey:
type: string
resources:
type: object
properties:
requests:
type: object
properties:
cpu:
type: string
memory:
type: string
limits:
type: object
properties:
cpu:
type: string
memory:
type: string
imagePullPolicy:
type: string
default: IfNotPresent
workerId:
type: string
status:
type: object
properties:
readyReplicas:
type: integer
phase:
type: string


@ -0,0 +1,108 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: gitservers.code.dev
annotations:
controller-gen.kubebuilder.io/version: v0.16.0
spec:
group: code.dev
names:
kind: GitServer
listKind: GitServerList
plural: gitservers
singular: gitserver
shortNames:
- gs
scope: Namespaced
versions:
- name: v1
served: true
storage: true
subresources:
status: {}
additionalPrinterColumns:
- name: Age
jsonPath: .metadata.creationTimestamp
type: date
schema:
openAPIV3Schema:
type: object
required: [spec]
properties:
apiVersion:
type: string
kind:
type: string
metadata:
type: object
spec:
type: object
properties:
image:
type: string
default: myapp/gitserver:latest
env:
type: array
items:
type: object
required: [name]
properties:
name:
type: string
value:
type: string
valueFrom:
type: object
properties:
secretRef:
type: object
required: [name, secretName, secretKey]
properties:
name:
type: string
secretName:
type: string
secretKey:
type: string
resources:
type: object
properties:
requests:
type: object
properties:
cpu:
type: string
memory:
type: string
limits:
type: object
properties:
cpu:
type: string
memory:
type: string
sshServiceType:
type: string
default: NodePort
storageSize:
type: string
default: 10Gi
imagePullPolicy:
type: string
default: IfNotPresent
sshDomain:
type: string
sshPort:
type: integer
default: 22
httpPort:
type: integer
default: 8022
status:
type: object
properties:
readyReplicas:
type: integer
phase:
type: string


@ -0,0 +1,87 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: migrates.code.dev
annotations:
controller-gen.kubebuilder.io/version: v0.16.0
spec:
group: code.dev
names:
kind: Migrate
listKind: MigrateList
plural: migrates
singular: migrate
shortNames:
- mig
scope: Namespaced
versions:
- name: v1
served: true
storage: true
subresources:
status: {}
additionalPrinterColumns:
- name: Status
jsonPath: .status.phase
type: string
- name: Age
jsonPath: .metadata.creationTimestamp
type: date
schema:
openAPIV3Schema:
type: object
required: [spec]
properties:
apiVersion:
type: string
kind:
type: string
metadata:
type: object
spec:
type: object
properties:
image:
type: string
default: myapp/migrate:latest
env:
type: array
description: "Must include APP_DATABASE_URL"
items:
type: object
required: [name]
properties:
name:
type: string
value:
type: string
valueFrom:
type: object
properties:
secretRef:
type: object
required: [name, secretName, secretKey]
properties:
name:
type: string
secretName:
type: string
secretKey:
type: string
command:
type: string
default: up
description: "Migration command: up, down, fresh, refresh, reset"
backoffLimit:
type: integer
default: 3
status:
type: object
properties:
phase:
type: string
startTime:
type: string
completionTime:
type: string


@ -0,0 +1,36 @@
# ---- Stage 1: Build ----
FROM rust:1.94-bookworm AS builder
ARG BUILD_TARGET=x86_64-unknown-linux-gnu
ENV TARGET=${BUILD_TARGET}
RUN apt-get update && apt-get install -y --no-install-recommends \
pkg-config libssl-dev libclang-dev \
gcc g++ make \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /build
COPY Cargo.toml Cargo.lock ./
COPY libs/ libs/
COPY apps/email/ apps/email/
RUN cargo fetch
RUN --mount=type=cache,target=/usr/local/cargo/registry \
--mount=type=cache,target=/usr/local/cargo/git \
--mount=type=cache,target=target \
cargo build --release --package email-server --target ${TARGET}
# ---- Stage 2: Runtime ----
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y --no-install-recommends \
ca-certificates libssl3 \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /app
# ${TARGET} is only defined in the builder stage; copy from the fixed path.
COPY --from=builder /build/email-server /app/email-worker
ENV APP_LOG_LEVEL=info
ENTRYPOINT ["/app/email-worker"]


@ -0,0 +1,36 @@
# ---- Stage 1: Build ----
FROM rust:1.94-bookworm AS builder
ARG BUILD_TARGET=x86_64-unknown-linux-gnu
ENV TARGET=${BUILD_TARGET}
RUN apt-get update && apt-get install -y --no-install-recommends \
pkg-config libssl-dev libgit2-dev zlib1g-dev libclang-dev \
gcc g++ make \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /build
COPY Cargo.toml Cargo.lock ./
COPY libs/ libs/
COPY apps/git-hook/ apps/git-hook/
RUN cargo fetch
# Note: `target` is a cache mount, so the build output is not part of the
# image layer; copy the binary out while the mount is still attached.
RUN --mount=type=cache,target=/usr/local/cargo/registry \
    --mount=type=cache,target=/usr/local/cargo/git \
    --mount=type=cache,target=target \
    cargo build --release --package git-hook --target ${TARGET} && \
    cp target/${TARGET}/release/git-hook /build/git-hook
# ---- Stage 2: Runtime ----
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y --no-install-recommends \
ca-certificates libssl3 openssh-client \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /app
# ${TARGET} is only defined in the builder stage; copy from the fixed path.
COPY --from=builder /build/git-hook /app/git-hook
ENV APP_LOG_LEVEL=info
ENTRYPOINT ["/app/git-hook"]


@ -0,0 +1,41 @@
# ---- Stage 1: Build ----
FROM rust:1.94-bookworm AS builder
ARG BUILD_TARGET=x86_64-unknown-linux-gnu
ENV TARGET=${BUILD_TARGET}
RUN apt-get update && apt-get install -y --no-install-recommends \
pkg-config libssl-dev libgit2-dev zlib1g-dev libclang-dev \
gcc g++ make \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /build
COPY Cargo.toml Cargo.lock ./
COPY libs/ libs/
COPY apps/gitserver/ apps/gitserver/
RUN cargo fetch
# Note: `target` is a cache mount, so the build output is not part of the
# image layer; copy the binary out while the mount is still attached.
RUN --mount=type=cache,target=/usr/local/cargo/registry \
    --mount=type=cache,target=/usr/local/cargo/git \
    --mount=type=cache,target=target \
    cargo build --release --package gitserver --target ${TARGET} && \
    cp target/${TARGET}/release/gitserver /build/gitserver
# ---- Stage 2: Runtime ----
FROM debian:bookworm-slim AS runtime
RUN apt-get update && apt-get install -y --no-install-recommends \
ca-certificates libssl3 openssh-server \
&& rm -rf /var/lib/apt/lists/*
# SSH requires host keys and proper permissions.
# NOTE: `ssh-keygen -A` at build time bakes host keys into the image, so
# every container shares them; for production, mount keys from a Secret.
RUN mkdir -p /run/sshd && \
    ssh-keygen -A && \
    chmod 755 /etc/ssh
WORKDIR /app
# ${TARGET} is only defined in the builder stage; copy from the fixed path.
COPY --from=builder /build/gitserver /app/gitserver
ENV APP_LOG_LEVEL=info
ENTRYPOINT ["/app/gitserver"]

docker/migrate.Dockerfile

@ -0,0 +1,36 @@
# ---- Stage 1: Build ----
FROM rust:1.94-bookworm AS builder
ARG BUILD_TARGET=x86_64-unknown-linux-gnu
ENV TARGET=${BUILD_TARGET}
RUN apt-get update && apt-get install -y --no-install-recommends \
pkg-config libssl-dev libclang-dev \
gcc g++ make \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /build
COPY Cargo.toml Cargo.lock ./
COPY libs/ libs/
COPY apps/migrate/ apps/migrate/
RUN cargo fetch
# Note: `target` is a cache mount, so the build output is not part of the
# image layer; copy the binary out while the mount is still attached.
RUN --mount=type=cache,target=/usr/local/cargo/registry \
    --mount=type=cache,target=/usr/local/cargo/git \
    --mount=type=cache,target=target \
    cargo build --release --package migrate-cli --target ${TARGET} && \
    cp target/${TARGET}/release/migrate /build/migrate
# ---- Stage 2: Runtime ----
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y --no-install-recommends \
ca-certificates libssl3 \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /app
# ${TARGET} is only defined in the builder stage; copy from the fixed path.
COPY --from=builder /build/migrate /app/migrate
# Run migrations via: docker run --rm myapp/migrate up
ENTRYPOINT ["/app/migrate"]


@ -0,0 +1,39 @@
# ---- Stage 1: Build ----
FROM rust:1.94-bookworm AS builder
ARG BUILD_TARGET=x86_64-unknown-linux-gnu
ENV TARGET=${BUILD_TARGET}
RUN apt-get update && apt-get install -y --no-install-recommends \
pkg-config libssl-dev libclang-dev \
gcc g++ make \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /build
COPY Cargo.toml Cargo.lock ./
COPY libs/config/ libs/config/
COPY apps/operator/ apps/operator/
RUN cargo fetch
# Note: `target` is a cache mount, so the build output is not part of the
# image layer; copy the binary out while the mount is still attached.
RUN --mount=type=cache,target=/usr/local/cargo/registry \
    --mount=type=cache,target=/usr/local/cargo/git \
    --mount=type=cache,target=target \
    cargo build --release --package operator --target ${TARGET} && \
    cp target/${TARGET}/release/operator /build/operator
# ---- Stage 2: Runtime ----
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y --no-install-recommends \
ca-certificates libssl3 \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /app
# ${TARGET} is only defined in the builder stage; copy from the fixed path.
COPY --from=builder /build/operator /app/operator
# The operator reads POD_NAMESPACE and OPERATOR_IMAGE_PREFIX from env.
# It connects to the in-cluster Kubernetes API via the service account token.
# All child resources are created in the operator's own namespace.
ENV OPERATOR_LOG_LEVEL=info
ENTRYPOINT ["/app/operator"]
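The comments above mention POD_NAMESPACE, OPERATOR_IMAGE_PREFIX, and OPERATOR_LOG_LEVEL; a minimal sketch of how such env-driven settings could be read at startup (the struct and defaults are illustrative, not the operator's actual config code):

```rust
use std::env;

/// Illustrative settings struct; the real operator's config shape may differ.
#[derive(Debug)]
pub struct OperatorSettings {
    pub namespace: String,
    pub image_prefix: String,
    pub log_level: String,
}

impl OperatorSettings {
    /// Read each variable, falling back to the defaults used in the
    /// manifests above when it is unset.
    pub fn from_env() -> Self {
        Self {
            namespace: env::var("POD_NAMESPACE").unwrap_or_else(|_| "default".into()),
            image_prefix: env::var("OPERATOR_IMAGE_PREFIX").unwrap_or_else(|_| "myapp/".into()),
            log_level: env::var("OPERATOR_LOG_LEVEL").unwrap_or_else(|_| "info".into()),
        }
    }
}

fn main() {
    let settings = OperatorSettings::from_env();
    println!("{settings:?}");
}
```

In the deployment manifest below, POD_NAMESPACE is injected via the downward API (`fieldRef: metadata.namespace`), so the fallback only matters when running outside a cluster.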


@ -0,0 +1,128 @@
# ---- Namespace ----
apiVersion: v1
kind: Namespace
metadata:
name: code-system
---
# ---- ServiceAccount ----
apiVersion: v1
kind: ServiceAccount
metadata:
name: code-operator
namespace: code-system
---
# ---- RBAC: Role ----
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: code-operator
namespace: code-system
rules:
# CRDs we manage
- apiGroups: ["code.dev"]
resources: ["apps", "gitservers", "emailworkers", "githooks", "migrates"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
# Status subresources
- apiGroups: ["code.dev"]
resources: ["apps/status", "gitservers/status", "emailworkers/status", "githooks/status", "migrates/status"]
verbs: ["get", "patch", "update"]
# Child resources managed by App
- apiGroups: ["apps"]
resources: ["deployments"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
resources: ["services"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
# Child resources managed by GitServer
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
# Child resources managed by GitHook
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
# Child resources managed by Migrate
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete", "deletecollection"]
# Secrets (read-only for env var resolution)
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list", "watch"]
---
# ---- RBAC: RoleBinding ----
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: code-operator
namespace: code-system
subjects:
- kind: ServiceAccount
name: code-operator
namespace: code-system
roleRef:
kind: Role
name: code-operator
apiGroup: rbac.authorization.k8s.io
---
# ---- Deployment ----
apiVersion: apps/v1
kind: Deployment
metadata:
name: code-operator
namespace: code-system
labels:
app.kubernetes.io/name: code-operator
app.kubernetes.io/managed-by: code-operator
app.kubernetes.io/part-of: code-system
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: code-operator
template:
metadata:
labels:
app.kubernetes.io/name: code-operator
app.kubernetes.io/managed-by: code-operator
app.kubernetes.io/part-of: code-system
spec:
serviceAccountName: code-operator
terminationGracePeriodSeconds: 10
volumes:
- name: tmp
emptyDir: {}
containers:
- name: operator
image: myapp/operator:latest
imagePullPolicy: IfNotPresent
env:
- name: OPERATOR_IMAGE_PREFIX
value: "myapp/"
- name: OPERATOR_LOG_LEVEL
value: "info"
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
resources:
requests:
cpu: 10m
memory: 64Mi
limits:
memory: 256Mi
volumeMounts:
- name: tmp
mountPath: /tmp
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
capabilities:
drop:
- ALL


@ -0,0 +1,280 @@
# Example: deploying the full code system into `code-system` namespace.
#
# Prerequisites:
# 1. Install CRDs: kubectl apply -f ../crd/
# 2. Install Operator: kubectl apply -f ../operator/deployment.yaml
#
# Then apply this file:
# kubectl apply -f example/code-system.yaml
apiVersion: v1
kind: Secret
metadata:
name: app-secrets
namespace: code-system
type: Opaque
stringData:
APP_DATABASE_URL: "postgres://user:password@postgres:5432/codedb?sslmode=disable"
APP_REDIS_URLS: "redis://redis:6379"
APP_SMTP_HOST: "smtp.example.com"
APP_SMTP_PORT: "587"
APP_SMTP_USERNAME: "noreply@example.com"
APP_SMTP_PASSWORD: "change-me"
APP_SMTP_FROM: "noreply@example.com"
APP_AI_BASIC_URL: "https://api.openai.com/v1"
APP_AI_API_KEY: "sk-change-me"
APP_SSH_SERVER_PRIVATE_KEY: |
-----BEGIN OPENSSH PRIVATE KEY-----
... paste your SSH private key here ...
-----END OPENSSH PRIVATE KEY-----
APP_SSH_SERVER_PUBLIC_KEY: "ssh-ed25519 AAAAC3... your-pub-key"
---
# ---- App (main web service, 3 replicas) ----
apiVersion: code.dev/v1
kind: App
metadata:
name: app
namespace: code-system
spec:
image: myapp/app:latest
replicas: 3
imagePullPolicy: IfNotPresent
env:
- name: APP_DATABASE_URL
valueFrom:
secretRef:
name: app-secrets
secretName: app-secrets
secretKey: APP_DATABASE_URL
- name: APP_REDIS_URLS
valueFrom:
secretRef:
name: app-secrets
secretName: app-secrets
secretKey: APP_REDIS_URLS
- name: APP_SMTP_HOST
valueFrom:
secretRef:
name: app-secrets
secretName: app-secrets
secretKey: APP_SMTP_HOST
- name: APP_SMTP_USERNAME
valueFrom:
secretRef:
name: app-secrets
secretName: app-secrets
secretKey: APP_SMTP_USERNAME
- name: APP_SMTP_PASSWORD
valueFrom:
secretRef:
name: app-secrets
secretName: app-secrets
secretKey: APP_SMTP_PASSWORD
- name: APP_SMTP_FROM
valueFrom:
secretRef:
name: app-secrets
secretName: app-secrets
secretKey: APP_SMTP_FROM
- name: APP_AI_BASIC_URL
valueFrom:
secretRef:
name: app-secrets
secretName: app-secrets
secretKey: APP_AI_BASIC_URL
- name: APP_AI_API_KEY
valueFrom:
secretRef:
name: app-secrets
secretName: app-secrets
secretKey: APP_AI_API_KEY
- name: APP_DOMAIN_URL
value: "https://example.com"
- name: APP_LOG_LEVEL
value: "info"
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 500m
memory: 512Mi
livenessProbe:
port: 8080
path: /health
initialDelaySeconds: 10
readinessProbe:
port: 8080
path: /health
initialDelaySeconds: 5
---
# ---- GitServer (git HTTP + SSH, single instance) ----
apiVersion: code.dev/v1
kind: GitServer
metadata:
name: gitserver
namespace: code-system
spec:
image: myapp/gitserver:latest
imagePullPolicy: IfNotPresent
env:
- name: APP_DATABASE_URL
valueFrom:
secretRef:
name: app-secrets
secretName: app-secrets
secretKey: APP_DATABASE_URL
- name: APP_REDIS_URLS
valueFrom:
secretRef:
name: app-secrets
secretName: app-secrets
secretKey: APP_REDIS_URLS
- name: APP_SSH_SERVER_PRIVATE_KEY
valueFrom:
secretRef:
name: app-secrets
secretName: app-secrets
secretKey: APP_SSH_SERVER_PRIVATE_KEY
- name: APP_SSH_SERVER_PUBLIC_KEY
valueFrom:
secretRef:
name: app-secrets
secretName: app-secrets
secretKey: APP_SSH_SERVER_PUBLIC_KEY
- name: APP_SSH_DOMAIN
value: "git.example.com"
- name: APP_REPOS_ROOT
value: "/data/repos"
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 1000m
memory: 1Gi
sshServiceType: NodePort # Use LoadBalancer in production
sshPort: 22
httpPort: 8022
storageSize: 50Gi
---
# ---- EmailWorker (single instance) ----
apiVersion: code.dev/v1
kind: EmailWorker
metadata:
name: email-worker
namespace: code-system
spec:
image: myapp/email-worker:latest
imagePullPolicy: IfNotPresent
env:
- name: APP_DATABASE_URL
valueFrom:
secretRef:
name: app-secrets
secretName: app-secrets
secretKey: APP_DATABASE_URL
- name: APP_REDIS_URLS
valueFrom:
secretRef:
name: app-secrets
secretName: app-secrets
secretKey: APP_REDIS_URLS
- name: APP_SMTP_HOST
valueFrom:
secretRef:
name: app-secrets
secretName: app-secrets
secretKey: APP_SMTP_HOST
- name: APP_SMTP_USERNAME
valueFrom:
secretRef:
name: app-secrets
secretName: app-secrets
secretKey: APP_SMTP_USERNAME
- name: APP_SMTP_PASSWORD
valueFrom:
secretRef:
name: app-secrets
secretName: app-secrets
secretKey: APP_SMTP_PASSWORD
- name: APP_SMTP_FROM
valueFrom:
secretRef:
name: app-secrets
secretName: app-secrets
secretKey: APP_SMTP_FROM
resources:
requests:
cpu: 50m
memory: 64Mi
limits:
memory: 256Mi
---
# ---- GitHook (single instance) ----
apiVersion: code.dev/v1
kind: GitHook
metadata:
name: git-hook
namespace: code-system
spec:
image: myapp/git-hook:latest
imagePullPolicy: IfNotPresent
env:
- name: APP_DATABASE_URL
valueFrom:
secretRef:
name: app-secrets
secretName: app-secrets
secretKey: APP_DATABASE_URL
- name: APP_REDIS_URLS
valueFrom:
secretRef:
name: app-secrets
secretName: app-secrets
secretKey: APP_REDIS_URLS
resources:
requests:
cpu: 50m
memory: 64Mi
limits:
memory: 256Mi
---
# ---- Migrate (auto-triggered on apply) ----
apiVersion: code.dev/v1
kind: Migrate
metadata:
name: migrate
namespace: code-system
spec:
image: myapp/migrate:latest
command: up
backoffLimit: 3
env:
- name: APP_DATABASE_URL
valueFrom:
secretRef:
name: app-secrets
secretName: app-secrets
secretKey: APP_DATABASE_URL
---
# ---- Ingress (example for App) ----
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: app-ingress
namespace: code-system
annotations:
nginx.ingress.kubernetes.io/proxy-body-size: "100m"
spec:
rules:
- host: example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: app
port:
number: 80

docs/ARCHITECTURE-LAYERS.md

@ -0,0 +1,903 @@
# Code Project Architecture Layers
> A modern code collaboration and team communication platform
>
> Stack: Rust (backend) + TypeScript/React (frontend) + Kubernetes (deployment)
---
## System Overview
```
┌──────────────────────────────── User Layer ────────────────────────────────┐
│                                                                            │
│   Web browser            Git clients            External CI/CD             │
│   (React SPA)            (git/SSH)              (GitHub/GitLab)            │
└────────┬───────────────────────┬───────────────────────┬───────────────────┘
         │ HTTP/WS               │ Git protocol          │ Webhook
┌────────▼───────────────────────▼───────────────────────▼───────────────────┐
│                         Ingress Layer (Ingress/LB)                         │
│                Load Balancer / K8s Ingress (:80/:443)                      │
└────────┬───────────────────────┬───────────────────────┬───────────────────┘
         │ REST API              │ Git ops               │ Webhook
┌────────▼───────────────────────▼───────────────────────▼───────────────────┐
│                     Application Service Layer (apps/)                      │
│                                                                            │
│   apps/app             apps/gitserver      apps/git-hook     apps/email    │
│   main web API         Git HTTP/SSH        Git hook worker   email worker  │
│   :8080, HTTP + WS     :8021/:2222         async tasks       queue consumer│
│   multi-instance       single instance     single instance   single inst.  │
└────────────────────────────────────┬───────────────────────────────────────┘
                                     │
┌────────────────────────────────────▼───────────────────────────────────────┐
│                    Orchestration Layer (apps/operator)                     │
│                                                                            │
│   apps/operator (Kubernetes Operator)                                      │
│   App CRD ─ GitServer CRD ─ EmailWorker CRD ─ GitHook CRD ─ Migrate CRD    │
│        │                                                                   │
│        ▼                                                                   │
│   K8s resources (Deployments, Services, PVCs, Jobs)                        │
└────────────────────────────────────┬───────────────────────────────────────┘
                                     │
┌────────────────────────────────────▼───────────────────────────────────────┐
│                    Business Logic Layer (libs/service)                     │
│                                                                            │
│   AppService { global service aggregate }                                  │
│                                                                            │
│   agent/ (8 files)  AI model mgmt        auth/ (10)   auth & sessions      │
│   git/ (16)         Git operations       issue/ (8)   issue tracking       │
│   project/ (20)     projects & perms     user/ (12)   users & preferences  │
│   pull_req/ (5)     PR review & merge                                      │
│                                                                            │
│   + utils/ (project, repo, user) + ws_token + error + Pager                │
└────────────────────────────────────┬───────────────────────────────────────┘
          ┌──────────────────────────┼──────────────────────────┐
          ▼                          ▼                          ▼
┌─────────────────────┐  ┌─────────────────────┐  ┌──────────────────────────┐
│ HTTP routing layer  │  │ WebSocket layer     │  │ Background worker layer  │
│ (libs/api)          │  │ (libs/room)         │  │                          │
│ 100 route files     │  │                     │  │ libs/queue:              │
│                     │  │ /ws                 │  │   MessageProducer        │
│ /api/auth/* (9)     │  │ /ws/rooms/{id}      │  │   RedisPubSub            │
│ /api/git/* (100+)   │  │ /ws/projects/{id}   │  │   room_worker_task       │
│ /api/projects/*(50+)│  │                     │  │   start_email_worker     │
│ /api/issue/* (30+)  │  │ realtime broadcast  │  │                          │
│ /api/room/* (40+)   │  │ multi-instance sync │  │ libs/git/hook:           │
│ /api/pull_request/* │  │ AI streaming output │  │   GitServiceHooks        │
│   (20)              │  │                     │  │   GitHookPool            │
│ /api/agent/* (15)   │  │                     │  │                          │
│ /api/user/* (20)    │  │                     │  │                          │
│ /api/openapi/*(docs)│  │                     │  │                          │
└─────────┬───────────┘  └──────────┬──────────┘  └────────────┬─────────────┘
          └─────────────────────────┼──────────────────────────┘
┌───────────────────────────────────▼────────────────────────────────────────┐
│                     Infrastructure Layer (libs)                            │
│                                                                            │
│   libs/models    92 entity files, Sea-ORM entity definitions, type aliases │
│   libs/db        connection pool, cache abstraction, retry logic           │
│   libs/config    global configuration, .env loading, 12 submodules         │
│   libs/session   session middleware, Redis store, JWT + cookie             │
│   libs/git       19 submodules, libgit2 wrapper, HTTP + SSH protocols      │
│   libs/agent     6 submodules, OpenAI integration, Qdrant vector store     │
│   libs/email     SMTP delivery, lettre client, template engine             │
│   libs/avatar    image processing (image crate), resize/crop               │
│   libs/queue     message queue, Redis Streams, Pub/Sub                     │
│   libs/room      realtime chat rooms, 19 submodules, WebSocket management  │
│   libs/migrate   82+ migration scripts, sea-orm-migration,                 │
│                  up/down/fresh/refresh/reset                               │
│   libs/webhook, libs/rpc, libs/transport (placeholders)                    │
└───────────────────────────────────┬────────────────────────────────────────┘
┌───────────────────────────────────▼────────────────────────────────────────┐
│                               Storage Layer                                │
│                                                                            │
│   PostgreSQL :5432   users, projects/repos, issues/PRs, room messages,     │
│                      comments/labels                                       │
│   Redis :6379        sessions, cache, Pub/Sub, Stream queues, hook queues  │
│   Qdrant :6333       vector embeddings, AI indexes, similarity search      │
│   Filesystem         /data/avatars, /data/repos: avatar images, Git repos, │
│                      uploaded files                                        │
└───────────────────────────────────┬────────────────────────────────────────┘
                                    │ external APIs
┌───────────────────────────────────▼────────────────────────────────────────┐
│                             External Services                              │
│                                                                            │
│   SMTP server :587       email delivery, notification mail                 │
│   OpenAI API (HTTPS)     chat completions, AI assistant                    │
│   Embedding API (HTTPS)  text embedding, similarity computation            │
└────────────────────────────────────────────────────────────────────────────┘
```
---
## Frontend Architecture Layers
```
┌──────────────────── Frontend Application Layer (src/) ─────────────────────┐
│                                                                            │
│   Vite + React + TypeScript                                                │
│                                                                            │
│   src/main.tsx ──▶ App.tsx ──▶ BrowserRouter ──▶ Routes                    │
│                                                                            │
│   Page layer (app/)       Component layer           State management       │
│   59 page components      (components/)                                    │
│                           108 UI components         TanStack Query         │
│   auth/ (4)               ui/ (66)                  (server state)         │
│   init/ (2)               room/ (20)                                       │
│   user/ (1)               repository/ (8)           React Context          │
│   project/ (22)           project/ (4)              (global state)         │
│   repository/ (12)        auth/ (2)                                        │
│   settings/ (8)           layout/ (2)               local state            │
│                                                     (component state)      │
│                                                                            │
│   API client layer:                                                        │
│     src/client/ ──▶ generated by openapi-ts (from openapi.json)            │
│     400+ API functions with full TypeScript types, Axios HTTP client       │
│                                                                            │
│   Utility layer:                                                           │
│     src/hooks/    ──▶ custom React hooks                                   │
│     src/lib/      ──▶ helpers (api-error, rsa, date, ...)                  │
│     src/contexts/ ──▶ React contexts (User, Theme, ...)                    │
│     src/assets/   ──▶ static assets (images, icons)                        │
└────────────────────────────────────────────────────────────────────────────┘
```
---
## Frontend Route Structure
```
/                                        home/dashboard
├── /auth/                               auth routes
│   ├── /login                           login
│   ├── /register                        registration
│   ├── /password/reset                  password reset
│   └── /verify-email                    email verification
├── /init/                               initialization routes
│   ├── /project                         initialize project
│   └── /repository                      initialize repository
├── /user/:user                          user profile
├── /settings/                           personal settings
│   ├── /profile                         profile
│   ├── /account                         account settings
│   ├── /security                        security settings
│   ├── /tokens                          access tokens
│   ├── /ssh-keys                        SSH keys
│   ├── /preferences                     preferences
│   └── /activity                        activity log
├── /project/:project_name/              project routes
│   ├── /                                project overview
│   ├── /activity                        project activity
│   ├── /repositories                    repository list
│   ├── /issues                          issue list
│   │   ├── /new                         new issue
│   │   └── /:issueNumber                issue detail
│   ├── /boards                          board list
│   │   └── /:boardId                    board detail
│   ├── /members                         member management
│   ├── /room                            chat room list
│   │   └── /:roomId                     chat room
│   ├── /articles                        articles
│   ├── /resources                       resources
│   └── /settings/                       project settings
│       ├── /general                     general settings
│       ├── /labels                      label management
│       ├── /billing                     billing
│       ├── /members                     member management
│       ├── /oauth                       OAuth configuration
│       └── /webhook                     webhook management
├── /repository/:namespace/:repoName/    repository routes
│   ├── /                                repository overview
│   ├── /branches                        branch management
│   ├── /commits                         commit history
│   │   └── /:oid                        commit detail
│   ├── /contributors                    contributors
│   ├── /files                           file browser
│   ├── /tags                            tags
│   ├── /pull-requests                   PR list
│   │   ├── /new                         new PR
│   │   └── /:prNumber                   PR detail
│   └── /settings                        repository settings
├── /search                              global search
└── /notifications                       notification center
```
---
## Backend Service Dependencies
```
┌──────────────────────────────────────────────────────────────────────────┐
│                      apps/ dependency relationships                      │
│                                                                          │
│   apps/app ──────────────┐                                               │
│   apps/email ────────────┤                                               │
│   apps/git-hook ─────────┤──▶ libs/config   (global configuration)       │
│   apps/gitserver ────────┤──▶ libs/db       (connection pool + cache)    │
│   apps/migrate ──────────┤──▶ libs/session  (session management)         │
│   apps/operator ─────────┘──▶ libs/migrate  (database migrations)        │
│                          ├──▶ libs/service  (business logic layer)       │
│                          │     │                                         │
│                          │     ├──▶ libs/api    (HTTP routing)           │
│                          │     │                                         │
│                          │     ├──▶ libs/agent  (AI services)            │
│                          │     ├──▶ libs/avatar (avatar processing)      │
│                          │     ├──▶ libs/email  (email delivery)         │
│                          │     ├──▶ libs/room   (chat rooms)             │
│                          │     │     │                                   │
│                          │     │     └──▶ libs/queue (message queue)     │
│                          │     │                                         │
│                          │     └──▶ libs/git    (Git operations)         │
│                          │           │                                   │
│                          │           ├──▶ git2       (libgit2 bindings)  │
│                          │           ├──▶ git2-hooks (Git hooks)         │
│                          │           └──▶ russh      (SSH protocol)      │
│                          │                                               │
│                          └──▶ libs/models (data models, shared by all)   │
│                                │                                         │
│                                ├──▶ users/        (12 entities)          │
│                                ├──▶ projects/     (19 entities)          │
│                                ├──▶ repos/        (16 entities)          │
│                                ├──▶ issues/       (10 entities)          │
│                                ├──▶ pull_request/ (5 entities)           │
│                                ├──▶ rooms/        (11 entities)          │
│                                ├──▶ agents/       (6 entities)           │
│                                ├──▶ ai/           (3 entities)           │
│                                └──▶ system/       (3 entities)           │
└──────────────────────────────────────────────────────────────────────────┘
```
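The dependency graph above implies a single Cargo workspace at the repository root; a sketch of what the root Cargo.toml might declare (member paths are assumptions inferred from the layout shown, not verified against the repo):

```toml
[workspace]
resolver = "2"
members = [
    "apps/app",
    "apps/email",
    "apps/git-hook",
    "apps/gitserver",
    "apps/migrate",
    "apps/operator",
    "libs/api",
    "libs/config",
    "libs/db",
    "libs/git",
    "libs/models",
    "libs/service",
]
```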
---
## libs/models Entity Groups
```
libs/models entity groups (92 total)

Users (12 entities)
  user                          basic user info
  user_2fa                      two-factor authentication
  user_activity_log             user activity log
  user_email                    user email addresses
  user_email_change             email change history
  user_notification             user notifications
  user_password                 user passwords
  user_password_reset           password reset tokens
  user_preferences              user preferences
  user_relation                 user relations
  user_ssh_key                  SSH keys
  user_token                    access tokens

Projects (19 entities)
  project                       basic project info
  project_access_log            access log
  project_activity              activity records
  project_audit_log             audit log
  project_billing               billing info
  project_billing_history       billing history
  project_board                 boards
  project_board_card            board cards
  project_board_column          board columns
  project_follow                project follows
  project_history_name          historical names
  project_label                 project labels
  project_like                  project likes
  project_member_invitations    member invitations
  project_member_join_answers   join Q&A answers
  project_member_join_request   join requests
  project_member_join_settings  join settings
  project_members               project members
  project_watch                 project watches

Repos (16 entities)
  repo                          basic repository info
  repo_branch                   branch info
  repo_branch_protect           branch protection
  repo_collaborator             collaborators
  repo_commit                   commit records
  repo_fork                     repository forks
  repo_history_name             historical names
  repo_hook                     Git hooks
  repo_lfs_lock                 LFS locks
  repo_lfs_object               LFS objects
  repo_lock                     repository locks
  repo_star                     repository stars
  repo_tag                      repository tags
  repo_upstream                 upstream repositories
  repo_watch                    repository watches
  repo_webhook                  repository webhooks

Issues (10 entities)
  issue                         basic issue info
  issue_assignee                issue assignees
  issue_comment                 issue comments
  issue_comment_reaction        comment reactions
  issue_label                   issue labels
  issue_pull_request            issue-linked PRs
  issue_reaction                issue reactions
  issue_repo                    issue-repository link
  issue_subscriber              issue subscribers

Pull Requests (5 entities)
  pull_request                  basic PR info
  pull_request_commit           PR commits
  pull_request_review           PR reviews
  pull_request_review_comment   PR review comments
  pull_request_review_request   PR review requests

Rooms (11 entities)
  room                          basic room info
  room_ai                       room AI configuration
  room_category                 room categories
  room_member                   room members
  room_message                  chat messages
  room_message_edit_history     message edit history
  room_message_reaction         message reactions
  room_notifications            room notifications
  room_pin                      room pins
  room_thread                   room threads

Agents (6 entities)
  model                         AI models
  model_capability              model capabilities
  model_parameter_profile       model parameter profiles
  model_pricing                 model pricing
  model_provider                model providers
  model_version                 model versions

AI (3 entities)
  ai_session                    AI sessions
  ai_tool_auth                  AI tool authorization
  ai_tool_call                  AI tool calls

System (3 entities)
  label                         system labels
  notify                        system notifications
```
---
## libs/service Business Modules
```
libs/service business modules (93 files)

agent/            AI model management (8 files)
  code_review               AI code review
  model                     AI model management
  model_capability          model capability management
  model_parameter_profile   model parameter profiles
  model_pricing             model pricing management
  model_version             model version management
  pr_summary                PR summary generation
  provider                  model provider management

auth/             authentication (10 files)
  captcha                   captcha management
  email                     email authentication
  login                     login logic
  logout                    logout logic
  me                        current user info
  password                  password management
  register                  registration logic
  rsa                       RSA encryption
  totp                      TOTP two-factor auth

git/              Git operations (16 files)
  archive                   repository archiving
  blocking                  blocking operations
  blame                     git blame
  blob                      blob operations
  branch                    branch operations
  branch_protection         branch protection
  commit                    commit operations
  contributors              contributor statistics
  diff                      diff operations
  init                      repository initialization
  refs                      ref operations
  repo                      repository operations
  star                      star operations
  tag                       tag operations
  tree                      tree operations
  watch                     watch operations

issue/            issue management (8 files)
  assignee                  assignee management
  comment                   comment management
  issue                     issue CRUD
  label                     label management
  pull_request              issue-linked PRs
  reaction                  reactions
  repo                      repository issues
  subscriber                subscriber management

project/          project management (20 files)
  activity                  project activity
  audit                     audit log
  avatar                    project avatars
  billing                   billing management
  board                     board management
  can_use                   permission checks
  info                      project info
  init                      project initialization
  invitation                invitation management
  join_answers              join Q&A
  join_request              join requests
  join_settings             join settings
  labels                    label management
  like                      like management
  members                   member management
  repo                      repository management
  repo_permission           repository permissions
  settings                  project settings
  standard                  project standards
  transfer_repo             repository transfer
  watch                     watch management

pull_request/     PR management (5 files)
  merge                     PR merging
  pull_request              PR CRUD
  review                    PR reviews
  review_comment            review comments
  review_request            review requests

user/             user management (12 files)
  access_key                access keys
  avatar                    user avatars
  chpc                      user CHPC
  notification              notification management
  notify                    notification delivery
  preferences               preference settings
  profile                   user profiles
  projects                  user projects
  repository                user repositories
  ssh_key                   SSH keys
  subscribe                 subscription management
  user_info                 user info

utils/            utility functions (3 files)
  project                   project utilities
  repo                      repository utilities
  user                      user utilities

ws_token          WebSocket token service
error             service-layer errors
Pager             pagination struct
```
---
## libs/api Route Modules
```
┌─────────────────────────────────────────────────────────────────────────┐
│ libs/api 路由模块 (100 个文件) │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ auth/ Auth routes (9 files) │
│ ├── captcha Captcha endpoints │
│ ├── email Email auth endpoints │
│ ├── login Login endpoints │
│ ├── logout Logout endpoints │
│ ├── me Current-user endpoints │
│ ├── password Password endpoints │
│ ├── register Registration endpoints │
│ ├── totp TOTP endpoints │
│ └── ws_token WebSocket token endpoints │
│ │
│ git/ Git routes (18 files) │
│ ├── archive Archive endpoints │
│ ├── blame Blame endpoints │
│ ├── blob Blob endpoints │
│ ├── branch Branch endpoints │
│ ├── branch_protection Branch protection endpoints │
│ ├── commit Commit endpoints │
│ ├── contributors Contributor endpoints │
│ ├── diff Diff endpoints │
│ ├── init Init endpoints │
│ ├── refs Ref endpoints │
│ ├── repo Repository endpoints │
│ ├── star Star endpoints │
│ ├── tag Tag endpoints │
│ ├── tree Tree endpoints │
│ └── watch Watch endpoints │
│ │
│ project/ Project routes (17 files) │
│ ├── activity Activity endpoints │
│ ├── audit Audit endpoints │
│ ├── billing Billing endpoints │
│ ├── board Board endpoints │
│ ├── info Info endpoints │
│ ├── init Init endpoints │
│ ├── invitation Invitation endpoints │
│ ├── join_answers Join questionnaire endpoints │
│ ├── join_request Join request endpoints │
│ ├── join_settings Join settings endpoints │
│ ├── labels Label endpoints │
│ ├── like Like endpoints │
│ ├── members Member endpoints │
│ ├── repo Repository endpoints │
│ ├── settings Settings endpoints │
│ ├── transfer_repo Repository transfer endpoints │
│ └── watch Watch endpoints │
│ │
│ issue/ Issue routes (10 files) │
│ ├── assignee Assignee endpoints │
│ ├── comment Comment endpoints │
│ ├── comment_reaction Comment reaction endpoints │
│ ├── issue_label Issue label endpoints │
│ ├── label Label endpoints │
│ ├── pull_request Issue-linked PR endpoints │
│ ├── reaction Reaction endpoints │
│ ├── repo Per-repo issue endpoints │
│ └── subscriber Subscriber endpoints │
│ │
│ room/ Chat room routes (14 files) │
│ ├── ai AI endpoints │
│ ├── category Category endpoints │
│ ├── draft_and_history Draft and history endpoints │
│ ├── member Member endpoints │
│ ├── message Message endpoints │
│ ├── notification Notification endpoints │
│ ├── pin Pin endpoints │
│ ├── reaction Reaction endpoints │
│ ├── room Room endpoints │
│ ├── thread Thread endpoints │
│ ├── ws WebSocket endpoints │
│ ├── ws_handler WebSocket handler │
│ ├── ws_types WebSocket types │
│ └── ws_universal Universal WebSocket endpoints │
│ │
│ pull_request/ PR routes (5 files) │
│ ├── merge Merge endpoints │
│ ├── pull_request PR CRUD endpoints │
│ ├── review Review endpoints │
│ ├── review_comment Review comment endpoints │
│ └── review_request Review request endpoints │
│ │
│ agent/ AI agent routes (8 files) │
│ ├── code_review Code review endpoints │
│ ├── model Model endpoints │
│ ├── model_capability Model capability endpoints │
│ ├── model_parameter_profile Model parameter profile endpoints │
│ ├── model_pricing Model pricing endpoints │
│ ├── model_version Model version endpoints │
│ ├── pr_summary PR summary endpoints │
│ └── provider Model provider endpoints │
│ │
│ user/ User routes (10 files) │
│ ├── access_key Access key endpoints │
│ ├── chpc CHPC endpoints │
│ ├── notification Notification endpoints │
│ ├── preferences Preference endpoints │
│ ├── profile Profile endpoints │
│ ├── projects Project endpoints │
│ ├── repository Repository endpoints │
│ ├── ssh_key SSH key endpoints │
│ ├── subscribe Subscription endpoints │
│ └── user_info User info endpoints │
│ │
│ openapi/ OpenAPI doc generation │
│ route/ Route aggregation │
│ error/ API error handling │
└─────────────────────────────────────────────────────────────────────────┘
```
---
## Inter-Service Communication
```
┌────────────────────────────────────────────────────────────────────────────────────────────┐
│ Inter-Service Communication │
│ │
│ ┌──────────────────────────────────────────────────────────────────────────────────┐ │
│ │ Redis (core communication bus) │ │
│ │ │ │
│ │ Redis Streams ──▶ async message queues │ │
│ │ ├── room:stream:{room_id} room message persistence │ │
│ │ └── email:stream email delivery queue │ │
│ │ │ │
│ │ Redis Pub/Sub ──▶ real-time event broadcast │ │
│ │ ├── room:pub:{room_id} room-level broadcast │ │
│ │ └── project:pub:{proj_id} project-level broadcast │ │
│ │ │ │
│ │ Redis Lists ──▶ task queues │ │
│ │ ├── {hook}:sync Git hook sync tasks │ │
│ │ ├── {hook}:fsck Git hook integrity checks │ │
│ │ └── {hook}:gc Git hook garbage collection │ │
│ └──────────────────────────────────────────────────────────────────────────────────┘ │
│ │
│ ┌──────────────────────────────────────────────────────────────────────────────────┐ │
│ │ HTTP/REST API ──▶ synchronous service calls │ │
│ │ ├── app ↔ gitserver Git metadata queries │ │
│ │ └── app → external AI services OpenAI-compatible API calls │ │
│ └──────────────────────────────────────────────────────────────────────────────────┘ │
│ │
│ ┌──────────────────────────────────────────────────────────────────────────────────┐ │
│ │ WebSocket ──▶ real-time client communication │ │
│ │ ├── /ws universal WebSocket (multi-room subscription) │ │
│ │ ├── /ws/rooms/{room_id} room-level WebSocket │ │
│ │ └── /ws/projects/{proj_id} project-level WebSocket │ │
│ └──────────────────────────────────────────────────────────────────────────────────┘ │
│ │
│ ┌──────────────────────────────────────────────────────────────────────────────────┐ │
│ │ Kubernetes CRD + Operator ──▶ infrastructure orchestration │ │
│ │ ├── apps.code.dev App CRD → Deployment + Service │ │
│ │ ├── gitservers.code.dev GitServer CRD → Deployment + Service + PVC │ │
│ │ ├── emailworkers.code.dev EmailWorker CRD → Deployment │ │
│ │ ├── githooks.code.dev GitHook CRD → Deployment + ConfigMap │ │
│ │ └── migrates.code.dev Migrate CRD → Job │ │
│ └──────────────────────────────────────────────────────────────────────────────────┘ │
└────────────────────────────────────────────────────────────────────────────────────────────┘
```
---
## Data Flows in Detail
### 1. Chat Message Flow
```
Client A app instance 1 Redis app instance 2 Client B
│ │ │ │ │
│── WS send message ───▶│ │ │ │
│ │── XADD ──────────────▶│ │ │
│ │ room:stream:{id} │ │ │
│ │── PUBLISH ────────────▶│ │ │
│ │ room:pub:{id} │ │ │
│ │ │── event notification ──▶│ │
│ │ │ │── WS push ────────────▶│
│◀─ ACK ───────────────│ │ │ │
│ │ │ │ │
│ │◀──── XREADGROUP ─────│ │ │
│ │ (room_worker) │ │ │
│ │── write to PostgreSQL─│ │ │
```
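The persist-then-broadcast step above (XADD into `room:stream:{id}` for durability, then PUBLISH on `room:pub:{id}` so every app instance can push to its own WebSocket clients) can be modelled with a small in-memory stand-in. This is only a sketch of the pattern; the `Bus` type and its method names are hypothetical, not the project's actual API:

```rust
use std::collections::{HashMap, VecDeque};

/// In-memory stand-in for the Redis side of the chat flow:
/// `send_message` persists first (XADD analogue), then fans out (PUBLISH analogue).
struct Bus {
    streams: HashMap<String, VecDeque<String>>,          // room:stream:{id}
    subscribers: HashMap<String, Vec<VecDeque<String>>>, // room:pub:{id}
}

impl Bus {
    fn new() -> Self {
        Bus { streams: HashMap::new(), subscribers: HashMap::new() }
    }

    /// Register a consumer (an app instance) on a pub/sub channel; returns its index.
    fn subscribe(&mut self, channel: &str) -> usize {
        let subs = self.subscribers.entry(channel.to_string()).or_default();
        subs.push(VecDeque::new());
        subs.len() - 1
    }

    /// Persist to the stream, then broadcast to every live subscriber.
    fn send_message(&mut self, room_id: &str, body: &str) {
        self.streams
            .entry(format!("room:stream:{room_id}"))
            .or_default()
            .push_back(body.to_string());
        if let Some(subs) = self.subscribers.get_mut(&format!("room:pub:{room_id}")) {
            for inbox in subs.iter_mut() {
                inbox.push_back(body.to_string());
            }
        }
    }

    /// An app instance drains its inbox and pushes over WebSocket.
    fn poll(&mut self, channel: &str, sub: usize) -> Option<String> {
        self.subscribers.get_mut(channel)?.get_mut(sub)?.pop_front()
    }
}
```

Writing to the stream before publishing is the important ordering: a subscriber that misses the pub/sub event can still recover the message from the stream.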
### 2. Git Push Flow
```
Client gitserver Redis git-hook PostgreSQL
│ │ │ │ │
│── git push ────────▶│ │ │ │
│ (HTTP/SSH) │ │ │ │
│ │── git-receive-pack──▶│ │ │
│ │── LPUSH ────────────▶│ │ │
│ │ {hook}:sync │ │ │
│◀─ ACK ─────────────│ │ │ │
│ │ │── BRPOPLPUSH ─────▶│ │
│ │ │ │── sync metadata ─────▶│
│ │ │ │── optional: fsck/gc ─▶│
│ │ │◀── XACK ──────────│ │
```
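The LPUSH/BRPOPLPUSH pair above is Redis's reliable-queue pattern: the worker atomically moves a task into a processing list, so a crash between pickup and completion cannot lose it. A minimal in-memory sketch of those semantics (the `HookQueue` type and its methods are illustrative, not the project's actual code):

```rust
use std::collections::VecDeque;

/// Minimal model of the {hook}:sync task queue.
struct HookQueue {
    pending: VecDeque<String>,    // {hook}:sync
    processing: VecDeque<String>, // per-worker processing list
}

impl HookQueue {
    fn new() -> Self {
        Self { pending: VecDeque::new(), processing: VecDeque::new() }
    }

    /// LPUSH analogue: gitserver enqueues a sync task.
    fn lpush(&mut self, task: &str) {
        self.pending.push_front(task.to_string());
    }

    /// BRPOPLPUSH analogue: atomically move the oldest task to processing.
    fn take(&mut self) -> Option<String> {
        let task = self.pending.pop_back()?;
        self.processing.push_back(task.clone());
        Some(task)
    }

    /// Acknowledge after the metadata sync succeeds.
    fn ack(&mut self, task: &str) {
        self.processing.retain(|t| t != task);
    }

    /// On worker restart, requeue anything that was left in processing.
    fn recover(&mut self) {
        while let Some(t) = self.processing.pop_back() {
            self.pending.push_back(t);
        }
    }
}
```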
### 3. Email Delivery Flow
```
Business logic app Redis email-worker SMTP
│ │ │ │ │
│── trigger email ───▶│ │ │ │
│ │── XADD ───────────▶│ │ │
│ │ email:stream │ │ │
│◀─ return ─────────│ │ │ │
│ │ │── XREADGROUP ─────▶│ │
│ │ │ │── render template ───▶│
│ │ │ │── SMTP send ─────────▶│
│ │ │◀── XACK ──────────│ │
```
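The XREADGROUP/XACK pair gives at-least-once delivery: an entry handed to a consumer stays in the group's pending-entries list (PEL) until it is acknowledged, so an email is never silently dropped if the worker dies mid-send. A sketch of those semantics with an in-memory stream (the `MailStream` type is hypothetical):

```rust
use std::collections::HashMap;

/// Minimal model of email:stream consumed through a consumer group.
struct MailStream {
    entries: Vec<(u64, String)>,   // (entry id, payload)
    next_unread: usize,            // group read cursor
    pending: HashMap<u64, String>, // pending-entries list: id -> consumer
}

impl MailStream {
    fn new() -> Self {
        Self { entries: Vec::new(), next_unread: 0, pending: HashMap::new() }
    }

    /// XADD analogue: app enqueues an email job, returns the entry id.
    fn xadd(&mut self, payload: &str) -> u64 {
        let id = self.entries.len() as u64 + 1;
        self.entries.push((id, payload.to_string()));
        id
    }

    /// XREADGROUP analogue: hand the next entry to a consumer and track it as pending.
    fn xreadgroup(&mut self, consumer: &str) -> Option<(u64, String)> {
        let (id, payload) = self.entries.get(self.next_unread)?.clone();
        self.next_unread += 1;
        self.pending.insert(id, consumer.to_string());
        Some((id, payload))
    }

    /// XACK analogue: remove the entry from the PEL once the SMTP send succeeded.
    fn xack(&mut self, id: u64) -> bool {
        self.pending.remove(&id).is_some()
    }
}
```

Entries still in the PEL after a crash can be claimed by another worker, which is what makes the flow at-least-once rather than at-most-once.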
### 4. AI Chat Flow
```
Client app OpenAI API Qdrant PostgreSQL
│ │ │ │ │
│── AI message ────────▶│ │ │ │
│ │── generate embedding ▶│ │ │
│ │◀──── vector ─────────│ │ │
│ │── store vector ─────────────────────────▶│ │
│ │── streaming chat ────▶│ │ │
│◀─ Stream Chunk ──────│◀──── Stream ─────────│ │ │
│ │ │ │ │
│ │── save message ────────────────────────────────────────────▶│
│ │── retrieve similar messages ───────────▶│ │
│ │◀── similar results ────────────────────│ │
```
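The retrieval step is a nearest-neighbour search over message embeddings, which Qdrant performs server-side. The core ranking it applies can be sketched in a few lines; this is an illustrative reimplementation, not the project's retrieval code:

```rust
/// Cosine similarity between two embedding vectors.
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if na == 0.0 || nb == 0.0 { 0.0 } else { dot / (na * nb) }
}

/// Return indices of the top-k most similar corpus vectors to the query.
fn top_k(query: &[f32], corpus: &[Vec<f32>], k: usize) -> Vec<usize> {
    let mut scored: Vec<(usize, f32)> = corpus
        .iter()
        .enumerate()
        .map(|(i, v)| (i, cosine(query, v)))
        .collect();
    // Sort descending by similarity; embeddings are finite, so unwrap is safe here.
    scored.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    scored.into_iter().take(k).map(|(i, _)| i).collect()
}
```

The top-k hits are then injected into the chat context before the streaming completion call.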
---
## Tech Stack Summary
### Backend
| Category | Technology | Version |
|------|------|------|
| **Language** | Rust | Edition 2024 |
| **Web framework** | Actix-web | 4.13.0 |
| **WebSocket** | Actix-ws | 0.4.0 |
| **ORM** | SeaORM | 2.0.0-rc.37 |
| **Database** | PostgreSQL | - |
| **Cache/messaging** | Redis | 1.1.0 |
| **Vector store** | Qdrant | 1.17.0 |
| **Git** | git2 / russh | 0.20.0 / 0.55.0 |
| **Email** | Lettre | 0.11.19 |
| **AI** | async-openai | 0.34.0 |
| **K8s** | kube-rs | 0.98 |
| **gRPC** | Tonic | 0.14.5 |
| **Logging** | slog / tracing | 2.8 / 0.1.44 |
### Frontend
| Category | Technology | Version |
|------|------|------|
| **Language** | TypeScript | 5.9 |
| **Framework** | React | 19.2 |
| **Routing** | React Router | 7.13 |
| **Build** | Vite + SWC | 8.0 |
| **UI** | shadcn/ui + Tailwind | 4.11 / 4.2 |
| **State** | TanStack Query | 5.96 |
| **HTTP** | Axios + OpenAPI codegen | 1.7 |
| **Markdown** | react-markdown + Shiki | 10 / 1 |
| **Drag & drop** | dnd-kit | 6.3 |
---
## Docker and K8s Deployment
```
┌──────────────────────────────────────────────────────────────────────────────┐
│ Docker images (6) │
│ │
│ docker/app.Dockerfile ──▶ apps/app main application image │
│ docker/email-worker.Dockerfile ──▶ apps/email email worker image │
│ docker/git-hook.Dockerfile ──▶ apps/git-hook Git hook image │
│ docker/gitserver.Dockerfile ──▶ apps/gitserver Git server image │
│ docker/migrate.Dockerfile ──▶ apps/migrate DB migration image │
│ docker/operator.Dockerfile ──▶ apps/operator K8s Operator image │
└──────────────────────────────────────────────────────────────────────────────┘
┌──────────────────────────────────────────────────────────────────────────────┐
│ Kubernetes CRDs (5) │
│ │
│ docker/crd/app-crd.yaml ──▶ apps.code.dev │
│ docker/crd/gitserver-crd.yaml ──▶ gitservers.code.dev │
│ docker/crd/email-worker-crd.yaml ──▶ emailworkers.code.dev │
│ docker/crd/git-hook-crd.yaml ──▶ githooks.code.dev │
│ docker/crd/migrate-crd.yaml ──▶ migrates.code.dev │
└──────────────────────────────────────────────────────────────────────────────┘
┌──────────────────────────────────────────────────────────────────────────────┐
│ K8s deployment configs │
│ │
│ docker/operator/deployment.yaml ──▶ Operator Deployment │
│ docker/operator/example/ ──▶ CRD usage examples │
│ code-system.yaml │
└──────────────────────────────────────────────────────────────────────────────┘
```
---
## Key Design Features
| Feature | Description |
|------|------|
| **Monorepo architecture** | Rust workspace + frontend monorepo, managed as one |
| **Clear layering** | Route → business → infrastructure → storage layers, with well-defined responsibilities |
| **Async-first** | Asynchronous message processing built on Redis Streams |
| **Real-time communication** | WebSocket + Redis Pub/Sub keep multiple instances in sync |
| **K8s-native** | Operator + 5 CRDs manage the full lifecycle |
| **Type safety** | TypeScript client auto-generated from OpenAPI |
| **Extensible** | Services deploy independently and scale horizontally |
| **Git-compatible** | Full HTTP/SSH Git protocol support + LFS |
| **AI integration** | Native OpenAI-compatible API integration + vector retrieval |
| **92 database entities** | Cover users, projects, repositories, issues, PRs, chat rooms, AI, and the rest of the business domain |
eslint.config.js Normal file
@ -0,0 +1,27 @@
import js from '@eslint/js'
import globals from 'globals'
import reactHooks from 'eslint-plugin-react-hooks'
import reactRefresh from 'eslint-plugin-react-refresh'
import tseslint from 'typescript-eslint'
import {defineConfig, globalIgnores} from 'eslint/config'
export default defineConfig([
globalIgnores(['dist', 'src/client/**']),
{
files: ['**/*.{ts,tsx}'],
extends: [
js.configs.recommended,
tseslint.configs.recommended,
reactHooks.configs.flat.recommended,
reactRefresh.configs.vite,
],
rules: {
// Disable set-state-in-effect as it's a valid pattern for initializing form state from server data
'react-hooks/set-state-in-effect': 'off',
},
languageOptions: {
ecmaVersion: 2020,
globals: globals.browser,
},
},
])
index.html Normal file
@ -0,0 +1,13 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8"/>
<link href="/logo.png" rel="icon" type="image/png"/>
<meta content="width=device-width, initial-scale=1.0" name="viewport"/>
<title>GitDataAi</title>
</head>
<body>
<div id="root"></div>
<script src="/src/main.tsx" type="module"></script>
</body>
</html>
@ -0,0 +1,26 @@
[package]
name = "agent-tool-derive"
version.workspace = true
edition.workspace = true
authors.workspace = true
description.workspace = true
repository.workspace = true
readme.workspace = true
homepage.workspace = true
license.workspace = true
keywords.workspace = true
categories.workspace = true
documentation.workspace = true
[lib]
proc-macro = true
path = "src/lib.rs"
[dependencies]
syn = { version = "2", features = ["full", "extra-traits"] }
quote = "1"
proc-macro2 = "1"
serde = { version = "1", features = ["derive"] }
serde_json = "1"
convert_case = "0.11"
futures = "0.3"
@ -0,0 +1,373 @@
//! Procedural macro for generating tool definitions from functions.
//!
//! # Example
//!
//! ```ignore
//! use agent_tool_derive::tool;
//!
//! #[tool(description = "Search issues by title")]
//! fn search_issues(
//! title: String,
//! status: Option<String>,
//! ) -> Result<Vec<serde_json::Value>, String> {
//! Ok(vec![])
//! }
//! ```
//!
//! Generates:
//! - A `SearchIssuesParameters` struct (serde Deserialize)
//! - A `SEARCH_ISSUES_DEFINITION: ToolDefinition` constant
//! - A `register_search_issues(registry: &mut ToolRegistry)` helper
extern crate proc_macro;
use convert_case::{Case, Casing};
use proc_macro::TokenStream;
use quote::{format_ident, quote};
use std::collections::HashMap;
use syn::punctuated::Punctuated;
use syn::{
Expr, ExprLit, Ident, Lit, Meta, ReturnType, Token, Type,
parse::{Parse, ParseStream},
};
/// Parse the attribute arguments: `description = "...", params(...), required(...)`
struct ToolArgs {
description: Option<String>,
param_descriptions: HashMap<String, String>,
required: Vec<String>,
}
impl Parse for ToolArgs {
fn parse(input: ParseStream) -> syn::Result<Self> {
Self::parse_from(input)
}
}
impl ToolArgs {
fn new() -> Self {
Self {
description: None,
param_descriptions: HashMap::new(),
required: Vec::new(),
}
}
fn parse_from(input: ParseStream) -> syn::Result<Self> {
let mut this = Self::new();
if input.is_empty() {
return Ok(this);
}
let meta_list: Punctuated<Meta, Token![,]> = Punctuated::parse_terminated(input)?;
for meta in meta_list {
match meta {
Meta::NameValue(nv) => {
let ident = nv
.path
.get_ident()
.ok_or_else(|| syn::Error::new_spanned(&nv.path, "expected identifier"))?;
if ident == "description" {
if let Expr::Lit(ExprLit {
lit: Lit::Str(s), ..
}) = nv.value
{
this.description = Some(s.value());
} else {
return Err(syn::Error::new_spanned(
&nv.value,
"description must be a string literal",
));
}
}
}
Meta::List(list) if list.path.is_ident("params") => {
let inner: Punctuated<Meta, Token![,]> =
list.parse_args_with(Punctuated::parse_terminated)?;
for item in inner {
if let Meta::NameValue(nv) = item {
let param_name = nv
.path
.get_ident()
.ok_or_else(|| {
syn::Error::new_spanned(&nv.path, "expected identifier")
})?
.to_string();
if let Expr::Lit(ExprLit {
lit: Lit::Str(s), ..
}) = nv.value
{
this.param_descriptions.insert(param_name, s.value());
}
}
}
}
Meta::List(list) if list.path.is_ident("required") => {
let required_vars: Punctuated<Ident, Token![,]> =
list.parse_args_with(Punctuated::parse_terminated)?;
for var in required_vars {
this.required.push(var.to_string());
}
}
_ => {}
}
}
Ok(this)
}
}
/// Map a Rust type to its JSON Schema type name.
fn json_type(ty: &Type) -> proc_macro2::TokenStream {
use syn::Type as T;
let segs = match ty {
T::Path(p) => &p.path.segments,
_ => return quote! { "type": "object" },
};
let last = segs.last().map(|s| &s.ident);
let args = segs.last().and_then(|s| {
if let syn::PathArguments::AngleBracketed(a) = &s.arguments {
Some(&a.args)
} else {
None
}
});
match (last.map(|i| i.to_string()).as_deref(), args) {
(Some("Vec" | "vec::Vec"), Some(args)) if !args.is_empty() => {
if let syn::GenericArgument::Type(inner) = &args[0] {
let inner_type = json_type(inner);
            // Emit bare `key: value` pairs (no outer braces) so callers can
            // splice the tokens directly into `serde_json::json!({ ... })`.
            return quote! {
                "type": "array",
                "items": { #inner_type }
            };
}
quote! { "type": "array" }
}
(Some("String" | "str" | "char"), _) => quote! { "type": "string" },
(Some("bool"), _) => quote! { "type": "boolean" },
(Some("i8" | "i16" | "i32" | "i64" | "isize"), _) => quote! { "type": "integer" },
(Some("u8" | "u16" | "u32" | "u64" | "usize"), _) => quote! { "type": "integer" },
(Some("f32" | "f64"), _) => quote! { "type": "number" },
_ => quote! { "type": "object" },
}
}
/// Extract return type info from `-> Result<T, E>`.
fn parse_return_type(
ret: &ReturnType,
) -> syn::Result<(proc_macro2::TokenStream, proc_macro2::TokenStream)> {
match ret {
ReturnType::Type(_, ty) => {
let ty = &**ty;
if let Type::Path(p) = ty {
let last = p
.path
.segments
.last()
.ok_or_else(|| syn::Error::new_spanned(&p.path, "invalid return type"))?;
if last.ident == "Result" {
if let syn::PathArguments::AngleBracketed(a) = &last.arguments {
let args = &a.args;
if args.len() == 2 {
let ok = &args[0];
let err = &args[1];
return Ok((quote!(#ok), quote!(#err)));
}
}
return Err(syn::Error::new_spanned(
&last,
"Result must have 2 type parameters",
));
}
}
Err(syn::Error::new_spanned(
ty,
"function must return Result<T, E>",
))
}
_ => Err(syn::Error::new_spanned(
ret,
"function must have a return type",
)),
}
}
/// The `#[tool]` attribute macro.
///
/// Usage:
/// ```ignore
/// #[tool(description = "Tool description", params(
/// arg1 = "Description of arg1",
/// arg2 = "Description of arg2",
/// ))]
/// async fn my_tool(arg1: String, arg2: Option<i32>) -> Result<serde_json::Value, String> {
/// Ok(serde_json::json!({}))
/// }
/// ```
///
/// Generates:
/// - `MyToolParameters` struct with serde Deserialize
/// - `MY_TOOL_DEFINITION: ToolDefinition` constant
/// - `register_my_tool(registry: &mut ToolRegistry)` helper function
#[proc_macro_attribute]
pub fn tool(args: TokenStream, input: TokenStream) -> TokenStream {
let args = syn::parse_macro_input!(args as ToolArgs);
let input_fn = syn::parse_macro_input!(input as syn::ItemFn);
let fn_name = &input_fn.sig.ident;
let fn_name_str = fn_name.to_string();
let vis = &input_fn.vis;
let is_async = input_fn.sig.asyncness.is_some();
// Parse return type: Result<T, E>
let (_output_type, _error_type) = match parse_return_type(&input_fn.sig.output) {
Ok(t) => t,
Err(e) => return e.into_compile_error().into(),
};
// PascalCase struct name
let struct_name = format_ident!("{}", fn_name_str.to_case(Case::Pascal));
let params_struct_name = format_ident!("{}Parameters", struct_name);
let definition_const_name = format_ident!("{}_DEFINITION", fn_name_str.to_uppercase());
let register_fn_name = format_ident!("register_{}", fn_name_str);
// Extract parameters from function signature
let mut param_names: Vec<Ident> = Vec::new();
let mut param_types: Vec<Type> = Vec::new();
let mut param_json_types: Vec<proc_macro2::TokenStream> = Vec::new();
let mut param_descs: Vec<proc_macro2::TokenStream> = Vec::new();
let required_args = args.required.clone();
for arg in &input_fn.sig.inputs {
let syn::FnArg::Typed(pat_type) = arg else {
continue;
};
let syn::Pat::Ident(pat_ident) = &*pat_type.pat else {
continue;
};
let name = &pat_ident.ident;
let ty = &*pat_type.ty;
let name_str = name.to_string();
let desc = args
.param_descriptions
.get(&name_str)
.map(|s| quote! { #s.to_string() })
.unwrap_or_else(|| quote! { format!("Parameter {}", #name_str) });
param_names.push(format_ident!("{}", name.to_string()));
param_types.push(ty.clone());
param_json_types.push(json_type(ty));
param_descs.push(desc);
}
    // Which params are required: default to every param whose type is not
    // Option<T>, unless an explicit `required(...)` list was given.
    let is_option = |ty: &Type| -> bool {
        matches!(ty, Type::Path(p)
            if p.path.segments.last().is_some_and(|s| s.ident == "Option"))
    };
    let required: Vec<proc_macro2::TokenStream> = if required_args.is_empty() {
        param_names
            .iter()
            .zip(param_types.iter())
            .filter(|(_, ty)| !is_option(ty))
            .map(|(name, _)| quote! { stringify!(#name) })
            .collect()
    } else {
        required_args.iter().map(|s| quote! { #s }).collect()
    };
// Tool description
let tool_description = args
.description
.map(|s| quote! { #s.to_string() })
.unwrap_or_else(|| quote! { format!("Function {}", #fn_name_str) });
// Call invocation (async vs sync)
let call_args = param_names.iter().map(|n| quote! { args.#n });
let fn_call = if is_async {
quote! { #fn_name(#(#call_args),*).await }
} else {
quote! { #fn_name(#(#call_args),*) }
};
let expanded = quote! {
// Parameters struct: deserialized from JSON args by serde
#[derive(serde::Deserialize)]
#vis struct #params_struct_name {
#(#vis #param_names: #param_types,)*
}
// Keep the original function unchanged
#input_fn
        // Lazily-initialized ToolDefinition — register this with ToolRegistry.
        // (String fields cannot be built in a `const` item, so use `LazyLock`.)
        #vis static #definition_const_name: std::sync::LazyLock<agent::ToolDefinition> =
            std::sync::LazyLock::new(|| agent::ToolDefinition {
                name: #fn_name_str.to_string(),
                description: Some(#tool_description),
                parameters: Some(agent::ToolSchema {
                    schema_type: "object".to_string(),
                    properties: Some({
                        let mut map = std::collections::HashMap::new();
                        #({
                            map.insert(stringify!(#param_names).to_string(), agent::ToolParam {
                                name: stringify!(#param_names).to_string(),
                                param_type: {
                                    // Splice the generated schema tokens into a JSON
                                    // value and read back its "type" field.
                                    let jt = serde_json::json!({ #param_json_types });
                                    jt.get("type")
                                        .and_then(|v| v.as_str())
                                        .unwrap_or("object")
                                        .to_string()
                                },
                                description: Some(#param_descs),
                                required: true,
                                properties: None,
                                items: None,
                            });
                        })*
                        map
                    }),
                    required: Some(vec![#(#required.to_string()),*]),
                }),
                strict: false,
            });
/// Registers this tool in the given registry.
///
/// Generated by `#[tool]` macro for function `#fn_name_str`.
#vis fn #register_fn_name(registry: &mut agent::ToolRegistry) {
let def = #definition_const_name.clone();
let fn_name = #fn_name_str.to_string();
registry.register_fn(fn_name, move |_ctx, args| {
let args: #params_struct_name = match serde_json::from_value(args) {
Ok(a) => a,
Err(e) => {
                        // `Pin::new(Box::new(..))` requires `Unpin`, which async
                        // blocks do not implement; `Box::pin` is the correct form.
                        return Box::pin(async move {
                            Err(agent::ToolError::ParseError(e.to_string()))
                        })
}
};
                Box::pin(async move {
                    let result = #fn_call;
                    match result {
                        Ok(v) => Ok(serde_json::to_value(v).unwrap_or(serde_json::Value::Null)),
                        Err(e) => Err(agent::ToolError::ExecutionError(e.to_string())),
                    }
                })
});
}
};
TokenStream::from(expanded)
}
libs/agent/Cargo.toml Normal file
@ -0,0 +1,37 @@
[package]
name = "agent"
version.workspace = true
edition.workspace = true
authors.workspace = true
description.workspace = true
repository.workspace = true
readme.workspace = true
homepage.workspace = true
license.workspace = true
keywords.workspace = true
categories.workspace = true
documentation.workspace = true
[lib]
path = "lib.rs"
name = "agent"
[dependencies]
async-openai = { version = "0.34.0", features = ["embedding", "chat-completion", "model"] }
tokio = { workspace = true }
async-trait = { workspace = true }
qdrant-client = { workspace = true }
sea-orm = { workspace = true }
serde = { workspace = true, features = ["derive"] }
serde_json = { workspace = true }
thiserror = { workspace = true }
db = { workspace = true }
config = { path = "../config" }
models = { workspace = true }
chrono = { workspace = true }
uuid = { workspace = true }
futures = { workspace = true }
tiktoken-rs = { workspace = true }
agent-tool-derive = { path = "../agent-tool-derive" }
once_cell = { workspace = true }
regex = { workspace = true }
[lints]
workspace = true
libs/agent/chat/context.rs Normal file
@ -0,0 +1,200 @@
use async_openai::types::chat::{
ChatCompletionRequestAssistantMessage, ChatCompletionRequestAssistantMessageContent,
ChatCompletionRequestDeveloperMessage, ChatCompletionRequestDeveloperMessageContent,
ChatCompletionRequestFunctionMessage, ChatCompletionRequestMessage,
ChatCompletionRequestSystemMessage, ChatCompletionRequestSystemMessageContent,
ChatCompletionRequestToolMessage, ChatCompletionRequestToolMessageContent,
ChatCompletionRequestUserMessage, ChatCompletionRequestUserMessageContent,
};
use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};
use serde_json::Value;
use std::collections::HashMap;
use uuid::Uuid;
use crate::compact::MessageSummary;
use models::rooms::room_message::Model as RoomMessageModel;
/// Sender type for AI context, supporting all roles in the chat.
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]
pub enum AiContextSenderType {
/// Regular user message
User,
/// AI assistant message
Ai,
/// System message (e.g., summary, notification)
System,
/// Developer message (for system-level instructions)
Developer,
/// Tool call message
Function,
/// Tool result message
FunctionResult,
}
impl AiContextSenderType {
pub fn from_sender_type(sender_type: &models::rooms::MessageSenderType) -> Self {
match sender_type {
models::rooms::MessageSenderType::Member => Self::User,
models::rooms::MessageSenderType::Admin => Self::User,
models::rooms::MessageSenderType::Owner => Self::User,
models::rooms::MessageSenderType::Ai => Self::Ai,
models::rooms::MessageSenderType::System => Self::System,
models::rooms::MessageSenderType::Tool => Self::Function,
models::rooms::MessageSenderType::Guest => Self::User,
}
}
}
/// Room message context for AI processing.
#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct RoomMessageContext {
pub uid: Uuid,
pub sender_type: AiContextSenderType,
pub sender_uid: Option<Uuid>,
pub sender_name: Option<String>,
pub content: String,
pub content_type: models::rooms::MessageContentType,
pub send_at: DateTime<Utc>,
/// Tool call ID for FunctionResult messages, used to associate tool results with their calls.
pub tool_call_id: Option<String>,
}
impl RoomMessageContext {
pub fn from_model(model: &RoomMessageModel, sender_name: Option<String>) -> Self {
Self {
uid: model.id,
sender_type: AiContextSenderType::from_sender_type(&model.sender_type),
sender_uid: model.sender_id,
sender_name,
content: model.content.clone(),
content_type: model.content_type.clone(),
send_at: model.send_at,
tool_call_id: Self::extract_tool_call_id(&model.content),
}
}
fn extract_tool_call_id(content: &str) -> Option<String> {
let content = content.trim();
if let Ok(v) = serde_json::from_str::<Value>(content) {
v.get("tool_call_id")
.and_then(|v| v.as_str())
.map(|s| s.to_string())
} else {
None
}
}
pub fn from_model_with_names(
model: &RoomMessageModel,
user_names: &HashMap<Uuid, String>,
) -> Self {
let sender_name = model
.sender_id
.and_then(|uid| user_names.get(&uid).cloned());
Self::from_model(model, sender_name)
}
pub fn to_message(&self) -> ChatCompletionRequestMessage {
match self.sender_type {
AiContextSenderType::User => {
ChatCompletionRequestMessage::User(ChatCompletionRequestUserMessage {
content: ChatCompletionRequestUserMessageContent::Text(self.display_content()),
name: self.sender_name.clone(),
})
}
AiContextSenderType::Ai => {
ChatCompletionRequestMessage::Assistant(ChatCompletionRequestAssistantMessage {
content: Some(ChatCompletionRequestAssistantMessageContent::Text(
self.display_content(),
)),
name: self.sender_name.clone(),
refusal: None,
audio: None,
tool_calls: None,
#[allow(deprecated)]
function_call: None,
})
}
AiContextSenderType::System => {
ChatCompletionRequestMessage::System(ChatCompletionRequestSystemMessage {
content: ChatCompletionRequestSystemMessageContent::Text(
self.display_content(),
),
name: self.sender_name.clone(),
})
}
AiContextSenderType::Developer => {
ChatCompletionRequestMessage::Developer(ChatCompletionRequestDeveloperMessage {
content: ChatCompletionRequestDeveloperMessageContent::Text(
self.display_content(),
),
name: self.sender_name.clone(),
})
}
AiContextSenderType::Function => {
ChatCompletionRequestMessage::Function(ChatCompletionRequestFunctionMessage {
content: Some(self.content.clone()),
name: self.display_content(), // Function name is stored in content
})
}
AiContextSenderType::FunctionResult => {
ChatCompletionRequestMessage::Tool(ChatCompletionRequestToolMessage {
content: ChatCompletionRequestToolMessageContent::Text(self.display_content()),
tool_call_id: self
.tool_call_id
.clone()
.unwrap_or_else(|| "unknown".to_string()),
})
}
}
}
fn display_content(&self) -> String {
let mut content = self.content.trim().to_string();
if content.is_empty() {
content = match self.content_type {
models::rooms::MessageContentType::Text => "[empty]".to_string(),
models::rooms::MessageContentType::Image => "[image]".to_string(),
models::rooms::MessageContentType::Audio => "[audio]".to_string(),
models::rooms::MessageContentType::Video => "[video]".to_string(),
models::rooms::MessageContentType::File => "[file]".to_string(),
};
}
if let Some(sender_name) = &self.sender_name {
content = format!("[{}] {}", sender_name, content);
}
content
}
}
impl From<&RoomMessageModel> for RoomMessageContext {
fn from(model: &RoomMessageModel) -> Self {
RoomMessageContext::from_model(model, None)
}
}
impl From<MessageSummary> for RoomMessageContext {
fn from(summary: MessageSummary) -> Self {
// Map MessageSenderType to AiContextSenderType
let sender_type = AiContextSenderType::from_sender_type(&summary.sender_type);
// For FunctionResult (tool results), ensure tool_call_id is set
let tool_call_id = if sender_type == AiContextSenderType::FunctionResult {
summary.tool_call_id
} else {
None
};
Self {
uid: summary.id,
sender_type,
sender_uid: summary.sender_id,
sender_name: Some(summary.sender_name),
content: summary.content,
content_type: summary.content_type,
send_at: summary.send_at,
tool_call_id,
}
}
}
libs/agent/chat/mod.rs Normal file
@ -0,0 +1,61 @@
use std::pin::Pin;
use async_openai::types::chat::ChatCompletionTool;
use db::cache::AppCache;
use db::database::AppDatabase;
use models::agents::model;
use models::projects::project;
use models::repos::repo;
use models::rooms::{room, room_message};
use models::users::user;
use std::collections::HashMap;
use uuid::Uuid;
/// Maximum recursion rounds for tool-call loops (AI → tool → result → AI).
pub const DEFAULT_MAX_TOOL_DEPTH: usize = 3;
/// A single chunk from an AI streaming response.
#[derive(Debug, Clone)]
pub struct AiStreamChunk {
pub content: String,
pub done: bool,
}
/// Optional streaming callback: called for each token chunk.
pub type StreamCallback = Box<
dyn Fn(AiStreamChunk) -> Pin<Box<dyn std::future::Future<Output = ()> + Send>> + Send + Sync,
>;
pub struct AiRequest {
pub db: AppDatabase,
pub cache: AppCache,
pub model: model::Model,
pub project: project::Model,
pub sender: user::Model,
pub room: room::Model,
pub input: String,
pub mention: Vec<Mention>,
pub history: Vec<room_message::Model>,
/// Optional user name mapping: user_id -> username
pub user_names: HashMap<Uuid, String>,
pub temperature: f64,
pub max_tokens: i32,
pub top_p: f64,
pub frequency_penalty: f64,
pub presence_penalty: f64,
pub think: bool,
/// OpenAI tool definitions. If None or empty, tool calling is disabled.
pub tools: Option<Vec<ChatCompletionTool>>,
/// Maximum tool-call recursion depth (AI → tool → result → AI loops). Default: 3.
pub max_tool_depth: usize,
}
pub enum Mention {
User(user::Model),
Repo(repo::Model),
}
pub mod context;
pub mod service;
pub use context::{AiContextSenderType, RoomMessageContext};
pub use service::ChatService;
libs/agent/chat/service.rs Normal file
@ -0,0 +1,655 @@
use async_openai::Client;
use async_openai::config::OpenAIConfig;
use async_openai::types::chat::{
ChatCompletionMessageToolCalls, ChatCompletionRequestAssistantMessage,
ChatCompletionRequestAssistantMessageContent, ChatCompletionRequestMessage,
ChatCompletionRequestSystemMessage, ChatCompletionRequestUserMessage, ChatCompletionTool,
ChatCompletionTools, CreateChatCompletionRequest, CreateChatCompletionResponse,
CreateChatCompletionStreamResponse, FinishReason, ReasoningEffort, ToolChoiceOptions,
};
use futures::StreamExt;
use models::projects::project_skill;
use models::rooms::room_ai;
use sea_orm::{ColumnTrait, EntityTrait, QueryFilter};
use uuid::Uuid;
use super::context::RoomMessageContext;
use super::{AiRequest, AiStreamChunk, Mention, StreamCallback};
use crate::compact::{CompactConfig, CompactService};
use crate::embed::EmbedService;
use crate::error::{AgentError, Result};
use crate::perception::{PerceptionService, SkillEntry, ToolCallEvent};
use crate::tool::{ToolCall, ToolContext, ToolExecutor};
/// Service for handling AI chat requests in rooms.
pub struct ChatService {
openai_client: Client<OpenAIConfig>,
compact_service: Option<CompactService>,
embed_service: Option<EmbedService>,
perception_service: PerceptionService,
}
impl ChatService {
pub fn new(openai_client: Client<OpenAIConfig>) -> Self {
Self {
openai_client,
compact_service: None,
embed_service: None,
perception_service: PerceptionService::default(),
}
}
pub fn with_compact_service(mut self, compact_service: CompactService) -> Self {
self.compact_service = Some(compact_service);
self
}
pub fn with_embed_service(mut self, embed_service: EmbedService) -> Self {
self.embed_service = Some(embed_service);
self
}
pub fn with_perception_service(mut self, perception_service: PerceptionService) -> Self {
self.perception_service = perception_service;
self
}
#[allow(deprecated)]
pub async fn process(&self, request: AiRequest) -> Result<String> {
let tools: Vec<ChatCompletionTool> = request.tools.clone().unwrap_or_default();
let tools_enabled = !tools.is_empty();
let tool_choice = tools_enabled.then(|| {
async_openai::types::chat::ChatCompletionToolChoiceOption::Mode(ToolChoiceOptions::Auto)
});
let think = request.think;
let max_tool_depth = request.max_tool_depth;
let top_p = request.top_p;
let frequency_penalty = request.frequency_penalty;
let presence_penalty = request.presence_penalty;
let temperature_f = request.temperature;
let max_tokens_i = request.max_tokens;
let mut messages = self.build_messages(&request).await?;
let room_ai = room_ai::Entity::find()
.filter(room_ai::Column::Room.eq(request.room.id))
.filter(room_ai::Column::Model.eq(request.model.id))
.one(&request.db)
.await?;
let model_name = request.model.name.clone();
let temperature = room_ai
.as_ref()
.and_then(|r| r.temperature.map(|v| v as f32))
.unwrap_or(temperature_f as f32);
let max_tokens = room_ai
.as_ref()
.and_then(|r| r.max_tokens.map(|v| v as u32))
.unwrap_or(max_tokens_i as u32);
let mut tool_depth = 0;
loop {
let req = CreateChatCompletionRequest {
model: model_name.clone(),
messages: messages.clone(),
temperature: Some(temperature),
max_completion_tokens: Some(max_tokens),
top_p: Some(top_p as f32),
frequency_penalty: Some(frequency_penalty as f32),
presence_penalty: Some(presence_penalty as f32),
stream: Some(false),
reasoning_effort: Some(if think {
ReasoningEffort::High
} else {
ReasoningEffort::None
}),
tools: if tools_enabled {
Some(
tools
.iter()
.map(|t| ChatCompletionTools::Function(t.clone()))
.collect(),
)
} else {
None
},
tool_choice: tool_choice.clone(),
..Default::default()
};
let response: CreateChatCompletionResponse = self
.openai_client
.chat()
.create(req)
.await
.map_err(|e| AgentError::OpenAi(e.to_string()))?;
let choice = response
.choices
.into_iter()
.next()
.ok_or_else(|| AgentError::Internal("no choice in response".into()))?;
if tools_enabled {
if let Some(ref tool_calls) = choice.message.tool_calls {
if !tool_calls.is_empty() {
messages.push(ChatCompletionRequestMessage::Assistant(
ChatCompletionRequestAssistantMessage {
content: choice
.message
.content
.clone()
.map(ChatCompletionRequestAssistantMessageContent::Text),
name: None,
refusal: None,
audio: None,
tool_calls: Some(tool_calls.clone()),
function_call: None,
},
));
let calls: Vec<ToolCall> = tool_calls
.iter()
.filter_map(|tc| {
if let ChatCompletionMessageToolCalls::Function(
async_openai::types::chat::ChatCompletionMessageToolCall {
id,
function,
},
) = tc
{
Some(ToolCall {
id: id.clone(),
name: function.name.clone(),
arguments: function.arguments.clone(),
})
} else {
None
}
})
.collect();
if !calls.is_empty() {
let tool_messages = self.execute_tool_calls(calls, &request).await?;
messages.extend(tool_messages);
tool_depth += 1;
if tool_depth >= max_tool_depth {
return Ok(String::new());
}
continue;
}
}
}
}
let text = choice.message.content.unwrap_or_default();
return Ok(text);
}
}
/// Stream a chat completion, forwarding accumulated text through `on_chunk`
/// and resolving tool calls between rounds.
#[allow(deprecated)]
pub async fn process_stream(&self, request: AiRequest, on_chunk: StreamCallback) -> Result<()> {
let tools: Vec<ChatCompletionTool> = request.tools.clone().unwrap_or_default();
let tools_enabled = !tools.is_empty();
let tool_choice = tools_enabled.then(|| {
async_openai::types::chat::ChatCompletionToolChoiceOption::Mode(ToolChoiceOptions::Auto)
});
let think = request.think;
let max_tool_depth = request.max_tool_depth;
let top_p = request.top_p;
let frequency_penalty = request.frequency_penalty;
let presence_penalty = request.presence_penalty;
let temperature_f = request.temperature;
let max_tokens_i = request.max_tokens;
let mut messages = self.build_messages(&request).await?;
let room_ai = room_ai::Entity::find()
.filter(room_ai::Column::Room.eq(request.room.id))
.filter(room_ai::Column::Model.eq(request.model.id))
.one(&request.db)
.await?;
let model_name = request.model.name.clone();
let temperature = room_ai
.as_ref()
.and_then(|r| r.temperature.map(|v| v as f32))
.unwrap_or(temperature_f as f32);
let max_tokens = room_ai
.as_ref()
.and_then(|r| r.max_tokens.map(|v| v as u32))
.unwrap_or(max_tokens_i as u32);
let mut tool_depth = 0;
loop {
let req = CreateChatCompletionRequest {
model: model_name.clone(),
messages: messages.clone(),
temperature: Some(temperature),
max_completion_tokens: Some(max_tokens),
top_p: Some(top_p as f32),
frequency_penalty: Some(frequency_penalty as f32),
presence_penalty: Some(presence_penalty as f32),
stream: Some(true),
reasoning_effort: Some(if think {
ReasoningEffort::High
} else {
ReasoningEffort::None
}),
tools: if tools_enabled {
Some(
tools
.iter()
.map(|t| ChatCompletionTools::Function(t.clone()))
.collect(),
)
} else {
None
},
tool_choice: tool_choice.clone(),
..Default::default()
};
let mut stream = self
.openai_client
.chat()
.create_stream(req)
.await
.map_err(|e| AgentError::OpenAi(e.to_string()))?;
let mut text_accumulated = String::new();
let mut tool_call_chunks: Vec<ToolCallChunkAccum> = Vec::new();
let mut finish_reason: Option<FinishReason> = None;
while let Some(chunk_result) = stream.next().await {
let chunk: CreateChatCompletionStreamResponse =
chunk_result.map_err(|e| AgentError::OpenAi(e.to_string()))?;
let choice = match chunk.choices.first() {
Some(c) => c,
None => continue,
};
// Track finish reason
if let Some(ref fr) = choice.finish_reason {
finish_reason = Some(fr.clone());
}
// Text delta
if let Some(content) = &choice.delta.content {
text_accumulated.push_str(content);
on_chunk(AiStreamChunk {
content: text_accumulated.clone(),
done: false,
})
.await;
}
// Tool call deltas
if let Some(ref tool_chunks) = choice.delta.tool_calls {
for tc in tool_chunks {
let idx = tc.index as usize;
if tool_call_chunks.len() <= idx {
tool_call_chunks.resize(idx + 1, ToolCallChunkAccum::default());
}
if let Some(ref id) = tc.id {
tool_call_chunks[idx].id = Some(id.clone());
}
if let Some(ref fc) = tc.function {
if let Some(ref name) = fc.name {
tool_call_chunks[idx].name.push_str(name);
}
if let Some(ref args) = fc.arguments {
tool_call_chunks[idx].arguments.push_str(args);
}
}
}
}
}
let has_tool_calls = matches!(
finish_reason,
Some(FinishReason::ToolCalls) | Some(FinishReason::FunctionCall)
);
if has_tool_calls && tools_enabled {
// Send final text chunk
on_chunk(AiStreamChunk {
content: text_accumulated.clone(),
done: true,
})
.await;
// Build ToolCall list from accumulated chunks
let tool_calls: Vec<_> = tool_call_chunks
.into_iter()
.filter(|c| !c.name.is_empty())
.map(|c| ToolCall {
id: c.id.unwrap_or_else(|| Uuid::new_v4().to_string()),
name: c.name,
arguments: c.arguments,
})
.collect();
if !tool_calls.is_empty() {
// Append assistant message with tool calls to history
messages.push(ChatCompletionRequestMessage::Assistant(
ChatCompletionRequestAssistantMessage {
content: Some(
ChatCompletionRequestAssistantMessageContent::Text(
text_accumulated,
),
),
name: None,
refusal: None,
audio: None,
tool_calls: Some(
tool_calls
.iter()
.map(|tc| {
ChatCompletionMessageToolCalls::Function(
async_openai::types::chat::ChatCompletionMessageToolCall {
id: tc.id.clone(),
function: async_openai::types::chat::FunctionCall {
name: tc.name.clone(),
arguments: tc.arguments.clone(),
},
},
)
})
.collect(),
),
function_call: None,
},
));
let tool_messages = self.execute_tool_calls(tool_calls, &request).await?;
messages.extend(tool_messages);
tool_depth += 1;
if tool_depth >= max_tool_depth {
return Ok(());
}
continue;
}
}
on_chunk(AiStreamChunk {
content: text_accumulated,
done: true,
})
.await;
return Ok(());
}
}
/// Executes a batch of tool calls and returns the tool result messages.
async fn execute_tool_calls(
&self,
calls: Vec<ToolCall>,
request: &AiRequest,
) -> Result<Vec<ChatCompletionRequestMessage>> {
let mut ctx = ToolContext::new(
request.db.clone(),
request.cache.clone(),
request.room.id,
Some(request.sender.uid),
)
.with_project(request.project.id);
let executor = ToolExecutor::new();
let results = executor
.execute_batch(calls, &mut ctx)
.await
.map_err(|e| AgentError::Internal(e.to_string()))?;
Ok(ToolExecutor::to_tool_messages(&results))
}
/// Assemble the request message list: compacted history, mention context,
/// perceived skills, memories, and finally the user input.
async fn build_messages(
&self,
request: &AiRequest,
) -> Result<Vec<ChatCompletionRequestMessage>> {
let mut messages = Vec::new();
let mut processed_history = Vec::new();
if let Some(compact_service) = &self.compact_service {
// Auto-compact: only compresses when token count exceeds threshold
let config = CompactConfig::default();
match compact_service
.compact_room_auto(request.room.id, Some(request.user_names.clone()), config)
.await
{
Ok(compact_summary) => {
if !compact_summary.summary.is_empty() {
messages.push(ChatCompletionRequestMessage::System(
ChatCompletionRequestSystemMessage {
content: async_openai::types::chat::ChatCompletionRequestSystemMessageContent::Text(
format!("Conversation summary:\n{}", compact_summary.summary),
),
..Default::default()
},
));
}
processed_history = compact_summary.retained;
}
Err(_) => {
// Best-effort: if compaction fails, fall back to the raw history below.
}
}
}
if !processed_history.is_empty() {
for msg_summary in processed_history {
let ctx = RoomMessageContext::from(msg_summary);
messages.push(ctx.to_message());
}
} else {
for msg in &request.history {
let ctx = RoomMessageContext::from_model_with_names(msg, &request.user_names);
messages.push(ctx.to_message());
}
}
if let Some(embed_service) = &self.embed_service {
for mention in &request.mention {
match mention {
Mention::Repo(repo) => {
let query = format!(
"{} {}",
repo.repo_name,
repo.description.as_deref().unwrap_or_default()
);
match embed_service.search_issues(&query, 5).await {
Ok(issues) if !issues.is_empty() => {
let context = format!(
"Related issues:\n{}",
issues
.iter()
.map(|i| format!("- {}", i.payload.text))
.collect::<Vec<_>>()
.join("\n")
);
messages.push(ChatCompletionRequestMessage::System(
ChatCompletionRequestSystemMessage {
content: async_openai::types::chat::ChatCompletionRequestSystemMessageContent::Text(
context,
),
..Default::default()
},
));
}
Err(_) => {
// Vector search is best-effort; skip issue context on failure.
}
_ => {}
}
match embed_service.search_repos(&query, 3).await {
Ok(repos) if !repos.is_empty() => {
let context = format!(
"Related repositories:\n{}",
repos
.iter()
.map(|r| format!("- {}", r.payload.text))
.collect::<Vec<_>>()
.join("\n")
);
messages.push(ChatCompletionRequestMessage::System(
ChatCompletionRequestSystemMessage {
content: async_openai::types::chat::ChatCompletionRequestSystemMessageContent::Text(
context,
),
..Default::default()
},
));
}
Err(_) => {
// Vector search is best-effort; skip repository context on failure.
}
_ => {}
}
}
Mention::User(user) => {
let mut profile_parts = vec![format!("Username: {}", user.username)];
if let Some(ref display_name) = user.display_name {
profile_parts.push(format!("Display name: {}", display_name));
}
if let Some(ref org) = user.organization {
profile_parts.push(format!("Organization: {}", org));
}
if let Some(ref website) = user.website_url {
profile_parts.push(format!("Website: {}", website));
}
messages.push(ChatCompletionRequestMessage::System(
ChatCompletionRequestSystemMessage {
content: async_openai::types::chat::ChatCompletionRequestSystemMessageContent::Text(
format!("Mentioned user profile:\n{}", profile_parts.join("\n")),
),
..Default::default()
},
));
}
}
}
}
// Inject relevant skills via the perception system (auto + active + passive).
let skill_contexts = self.build_skill_context(request).await;
for ctx in skill_contexts {
messages.push(ctx.to_system_message() as ChatCompletionRequestMessage);
}
// Inject relevant past conversation memories via vector similarity.
let memories = self.build_memory_context(request).await;
for mem in memories {
messages.push(mem.to_system_message());
}
messages.push(ChatCompletionRequestMessage::User(
ChatCompletionRequestUserMessage {
content: async_openai::types::chat::ChatCompletionRequestUserMessageContent::Text(
request.input.clone(),
),
..Default::default()
},
));
Ok(messages)
}
/// Fetch enabled skills for the current project and run them through the
/// perception system (auto + active + passive) to inject relevant context.
async fn build_skill_context(
&self,
request: &AiRequest,
) -> Vec<crate::perception::SkillContext> {
// Fetch enabled skills for this project.
let skills: Vec<SkillEntry> = match project_skill::Entity::find()
.filter(project_skill::Column::ProjectUuid.eq(request.project.id))
.filter(project_skill::Column::Enabled.eq(true))
.all(&request.db)
.await
{
Ok(models) => models
.into_iter()
.map(|s| SkillEntry {
slug: s.slug,
name: s.name,
description: s.description,
content: s.content,
})
.collect(),
Err(_) => return Vec::new(),
};
if skills.is_empty() {
return Vec::new();
}
// Build history text for auto-awareness scoring.
let history_texts: Vec<String> = request
.history
.iter()
.rev()
.take(10)
.map(|msg| msg.content.clone())
.collect();
// Active + passive + auto perception (keyword-based).
let tool_events: Vec<ToolCallEvent> = Vec::new(); // Tool calls tracked in loop via process()
let keyword_skills = self
.perception_service
.inject_skills(&request.input, &history_texts, &tool_events, &skills)
.await;
// Vector-aware active perception: semantic search for skills via Qdrant.
let mut vector_skills = Vec::new();
if let Some(embed_service) = &self.embed_service {
let awareness = crate::perception::VectorActiveAwareness::default();
vector_skills = awareness
.detect(embed_service, &request.input, &request.project.id.to_string())
.await;
}
// Merge: deduplicate by label, preferring vector results (higher signal).
let mut seen = std::collections::HashSet::new();
let mut result = Vec::new();
for ctx in vector_skills {
if seen.insert(ctx.label.clone()) {
result.push(ctx);
}
}
for ctx in keyword_skills {
if seen.insert(ctx.label.clone()) {
result.push(ctx);
}
}
result
}
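The merge step in `build_skill_context` can be sketched in isolation: entries from the first (vector, higher-signal) list win over keyword entries with the same label. The `(label, score)` tuple type below is illustrative, not the crate's `SkillContext`:

```rust
use std::collections::HashSet;

// Merge two ranked context lists, deduplicating by label and preferring
// entries from the first list — mirrors the vector-vs-keyword skill merge.
fn merge_by_label(
    preferred: Vec<(String, u32)>,
    fallback: Vec<(String, u32)>,
) -> Vec<(String, u32)> {
    let mut seen = HashSet::new();
    let mut out = Vec::new();
    for item in preferred.into_iter().chain(fallback) {
        if seen.insert(item.0.clone()) {
            out.push(item);
        }
    }
    out
}

fn main() {
    let vector = vec![("rust".to_string(), 9), ("sql".to_string(), 8)];
    let keyword = vec![("sql".to_string(), 3), ("git".to_string(), 2)];
    let merged = merge_by_label(vector, keyword);
    let labels: Vec<&str> = merged.iter().map(|(l, _)| l.as_str()).collect();
    assert_eq!(labels, ["rust", "sql", "git"]);
    assert_eq!(merged[1].1, 8); // vector "sql" kept, keyword duplicate dropped
    println!("ok");
}
```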
/// Inject relevant past conversation memories via vector similarity search.
async fn build_memory_context(
&self,
request: &AiRequest,
) -> Vec<crate::perception::vector::MemoryContext> {
let embed_service = match &self.embed_service {
Some(s) => s,
None => return Vec::new(),
};
// Search memories by current input semantic similarity.
let awareness = crate::perception::VectorPassiveAwareness::default();
awareness
.detect(embed_service, &request.input, &request.room.id.to_string())
.await
}
}
#[derive(Clone, Debug, Default)]
struct ToolCallChunkAccum {
id: Option<String>,
name: String,
arguments: String,
}
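The streaming loop in `process_stream` rebuilds complete tool calls from per-index deltas using this accumulator. A minimal, self-contained sketch of that accumulation (names here are illustrative; the real loop reads `choice.delta.tool_calls`):

```rust
#[derive(Clone, Debug, Default)]
struct Accum {
    id: Option<String>,
    name: String,
    arguments: String,
}

// Apply one streamed delta: grow the vec to the delta's index, record the id
// once, and append name/argument fragments as they arrive.
fn apply_delta(
    accums: &mut Vec<Accum>,
    index: usize,
    id: Option<&str>,
    name: Option<&str>,
    args: Option<&str>,
) {
    if accums.len() <= index {
        accums.resize(index + 1, Accum::default());
    }
    if let Some(id) = id {
        accums[index].id = Some(id.to_string());
    }
    if let Some(name) = name {
        accums[index].name.push_str(name);
    }
    if let Some(args) = args {
        accums[index].arguments.push_str(args);
    }
}

fn main() {
    let mut accums = Vec::new();
    apply_delta(&mut accums, 0, Some("call_1"), Some("get_weather"), Some("{\"city\":"));
    apply_delta(&mut accums, 0, None, None, Some("\"Paris\"}"));
    assert_eq!(accums[0].name, "get_weather");
    assert_eq!(accums[0].arguments, "{\"city\":\"Paris\"}");
    println!("ok");
}
```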

libs/agent/client.rs Normal file

@ -0,0 +1,279 @@
//! Unified AI client with built-in retry, token tracking, and session recording.
//!
//! Provides a single entry point for all AI calls with:
//! - Exponential backoff with jitter (max 3 retries)
//! - Retryable error classification (429/500/502/503/504)
//! - Token usage tracking (input/output)
use async_openai::Client;
use async_openai::config::OpenAIConfig;
use async_openai::types::chat::{
ChatCompletionRequestMessage, ChatCompletionTool, ChatCompletionToolChoiceOption,
ChatCompletionTools, CreateChatCompletionRequest, CreateChatCompletionResponse,
};
use std::time::Instant;
use crate::error::{AgentError, Result};
/// Configuration for the AI client.
#[derive(Clone)]
pub struct AiClientConfig {
pub api_key: String,
pub base_url: Option<String>,
}
impl AiClientConfig {
pub fn new(api_key: String) -> Self {
Self {
api_key,
base_url: None,
}
}
pub fn with_base_url(mut self, base_url: impl Into<String>) -> Self {
self.base_url = Some(base_url.into());
self
}
pub fn build_client(&self) -> Client<OpenAIConfig> {
let mut config = OpenAIConfig::new().with_api_key(&self.api_key);
if let Some(ref url) = self.base_url {
config = config.with_api_base(url);
}
Client::with_config(config)
}
}
/// Response from an AI call, including usage statistics.
#[derive(Debug, Clone)]
pub struct AiCallResponse {
pub content: String,
pub input_tokens: i64,
pub output_tokens: i64,
pub latency_ms: i64,
}
impl AiCallResponse {
pub fn total_tokens(&self) -> i64 {
self.input_tokens + self.output_tokens
}
}
/// Internal state for retry tracking.
#[derive(Debug)]
struct RetryState {
attempt: u32,
max_retries: u32,
max_backoff_ms: u64,
}
impl RetryState {
fn new(max_retries: u32) -> Self {
Self {
attempt: 0,
max_retries,
max_backoff_ms: 5000,
}
}
fn should_retry(&self) -> bool {
self.attempt < self.max_retries
}
/// Calculate backoff duration with "full jitter" technique.
fn backoff_duration(&self) -> std::time::Duration {
let exp = self.attempt.min(5);
// base = 500 * 2^exp, capped at max_backoff_ms
let base_ms = 500u64
.saturating_mul(2u64.pow(exp))
.min(self.max_backoff_ms);
// jitter: random [0, base_ms/2]
let jitter = (fastrand_u64(base_ms / 2 + 1)) as u64;
std::time::Duration::from_millis(base_ms / 2 + jitter)
}
fn next(&mut self) {
self.attempt += 1;
}
}
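The schedule implemented by `backoff_duration` above can be stated as bounds: the base doubles from 500 ms, caps at `max_backoff_ms`, and the sleep is `base/2` plus a random jitter in `[0, base/2]`, so it always lands in `[base/2, base]`. A sketch of those bounds:

```rust
// Bounds of the "full jitter" backoff used by RetryState::backoff_duration.
fn backoff_bounds(attempt: u32, max_backoff_ms: u64) -> (u64, u64) {
    let exp = attempt.min(5);
    let base = 500u64.saturating_mul(2u64.pow(exp)).min(max_backoff_ms);
    (base / 2, base) // the actual sleep lies in this closed interval
}

fn main() {
    assert_eq!(backoff_bounds(0, 5000), (250, 500));
    assert_eq!(backoff_bounds(1, 5000), (500, 1000));
    assert_eq!(backoff_bounds(4, 5000), (2500, 5000)); // capped at max_backoff_ms
    println!("ok");
}
```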
/// Fast pseudo-random u64 using a simple LCG.
/// Good enough for jitter — not for cryptography.
fn fastrand_u64(n: u64) -> u64 {
use std::sync::atomic::{AtomicU64, Ordering};
static STATE: AtomicU64 = AtomicU64::new(0x193_667_6a_5e_7c_57);
if n <= 1 {
return 0;
}
let mut current = STATE.load(Ordering::Relaxed);
loop {
let new_val = current.wrapping_mul(6364136223846793005).wrapping_add(1);
match STATE.compare_exchange_weak(current, new_val, Ordering::Relaxed, Ordering::Relaxed) {
Ok(_) => return new_val % n,
Err(actual) => current = actual,
}
}
}
/// Determine if an error is retryable.
fn is_retryable_error(err: &async_openai::error::OpenAIError) -> bool {
use async_openai::error::OpenAIError;
match err {
// Network errors (DNS failure, connection refused, timeout) are always retryable
OpenAIError::Reqwest(_) => true,
// For API errors, check the error code string (e.g., "rate_limit_exceeded")
OpenAIError::ApiError(api_err) => api_err.code.as_ref().map_or(false, |code| {
matches!(
code.as_str(),
"rate_limit_exceeded"
| "internal_server_error"
| "service_unavailable"
| "gateway_timeout"
| "bad_gateway"
)
}),
_ => false,
}
}
/// Call the AI model with automatic retry.
pub async fn call_with_retry(
messages: &[ChatCompletionRequestMessage],
model: &str,
config: &AiClientConfig,
max_retries: Option<u32>,
) -> Result<AiCallResponse> {
let client = config.build_client();
let mut state = RetryState::new(max_retries.unwrap_or(3));
loop {
let start = Instant::now();
let req = CreateChatCompletionRequest {
model: model.to_string(),
messages: messages.to_vec(),
..Default::default()
};
let result = client.chat().create(req).await;
match result {
Ok(response) => {
let latency_ms = start.elapsed().as_millis() as i64;
let (input_tokens, output_tokens) = extract_usage(&response);
return Ok(AiCallResponse {
content: extract_content(&response),
input_tokens,
output_tokens,
latency_ms,
});
}
Err(err) => {
if state.should_retry() && is_retryable_error(&err) {
let duration = state.backoff_duration();
eprintln!(
"AI call failed (attempt {}/{}), retrying in {:?}",
state.attempt + 1,
state.max_retries,
duration
);
tokio::time::sleep(duration).await;
state.next();
continue;
}
return Err(AgentError::OpenAi(err.to_string()));
}
}
}
}
/// Call with custom parameters (temperature, max_tokens, optional tools).
pub async fn call_with_params(
messages: &[ChatCompletionRequestMessage],
model: &str,
config: &AiClientConfig,
temperature: f32,
max_tokens: u32,
max_retries: Option<u32>,
tools: Option<&[ChatCompletionTool]>,
) -> Result<AiCallResponse> {
let client = config.build_client();
let mut state = RetryState::new(max_retries.unwrap_or(3));
loop {
let start = Instant::now();
let req = CreateChatCompletionRequest {
model: model.to_string(),
messages: messages.to_vec(),
temperature: Some(temperature),
max_completion_tokens: Some(max_tokens),
tools: tools.map(|ts| {
ts.iter()
.map(|t| ChatCompletionTools::Function(t.clone()))
.collect()
}),
tool_choice: tools.filter(|ts| !ts.is_empty()).map(|_| {
ChatCompletionToolChoiceOption::Mode(
async_openai::types::chat::ToolChoiceOptions::Auto,
)
}),
..Default::default()
};
let result = client.chat().create(req).await;
match result {
Ok(response) => {
let latency_ms = start.elapsed().as_millis() as i64;
let (input_tokens, output_tokens) = extract_usage(&response);
return Ok(AiCallResponse {
content: extract_content(&response),
input_tokens,
output_tokens,
latency_ms,
});
}
Err(err) => {
if state.should_retry() && is_retryable_error(&err) {
let duration = state.backoff_duration();
eprintln!(
"AI call failed (attempt {}/{}), retrying in {:?}",
state.attempt + 1,
state.max_retries,
duration
);
tokio::time::sleep(duration).await;
state.next();
continue;
}
return Err(AgentError::OpenAi(err.to_string()));
}
}
}
}
/// Extract text content from a chat completion response.
fn extract_content(response: &CreateChatCompletionResponse) -> String {
response
.choices
.first()
.and_then(|c| c.message.content.clone())
.unwrap_or_default()
}
/// Extract usage (input/output tokens) from a response.
fn extract_usage(response: &CreateChatCompletionResponse) -> (i64, i64) {
response
.usage
.as_ref()
.map(|u| {
(
i64::try_from(u.prompt_tokens).unwrap_or(0),
i64::try_from(u.completion_tokens).unwrap_or(0),
)
})
.unwrap_or((0, 0))
}


@ -0,0 +1,45 @@
use super::types::{CompactSummary, MessageSummary};
pub fn messages_to_text<F>(
messages: &[models::rooms::room_message::Model],
sender_mapper: F,
) -> String
where
F: Fn(&models::rooms::room_message::Model) -> String,
{
messages
.iter()
.map(|m| {
let sender = sender_mapper(m);
format!("[{}] {}: {}", m.send_at, sender, m.content)
})
.collect::<Vec<_>>()
.join("\n")
}
pub fn retained_as_text(retained: &[MessageSummary]) -> String {
retained
.iter()
.map(|m| format!("[{}] {}: {}", m.send_at, m.sender_name, m.content))
.collect::<Vec<_>>()
.join("\n")
}
pub fn summary_content(summary: &CompactSummary) -> String {
if summary.summary.is_empty() {
format!(
"## Recent conversation ({} messages)\n\n{}",
summary.retained.len(),
retained_as_text(&summary.retained)
)
} else {
format!(
"## Earlier conversation ({} messages summarised)\n{}\n\n\
## Most recent {} messages\n\n{}",
summary.messages_compressed,
summary.summary,
summary.retained.len(),
retained_as_text(&summary.retained)
)
}
}


@ -0,0 +1,8 @@
//! Context compaction for AI sessions and room message history.
pub mod helpers;
pub mod service;
pub mod types;
pub use service::CompactService;
pub use types::{CompactConfig, CompactLevel, CompactSummary, MessageSummary, ThresholdResult};


@ -0,0 +1,467 @@
use async_openai::Client;
use async_openai::config::OpenAIConfig;
use async_openai::types::chat::{
ChatCompletionRequestMessage, ChatCompletionRequestUserMessage, CreateChatCompletionRequest,
CreateChatCompletionResponse,
};
use chrono::Utc;
use models::ColumnTrait;
use models::rooms::room_message::{
Column as RmCol, Entity as RoomMessage, Model as RoomMessageModel,
};
use models::users::user::{Column as UserCol, Entity as User};
use sea_orm::{DatabaseConnection, EntityTrait, QueryFilter, QueryOrder};
use serde_json::Value;
use uuid::Uuid;
use crate::AgentError;
use crate::compact::helpers::summary_content;
use crate::compact::types::{
CompactConfig, CompactLevel, CompactSummary, MessageSummary, ThresholdResult,
};
use crate::tokent::{TokenUsage, resolve_usage};
#[derive(Clone)]
pub struct CompactService {
db: DatabaseConnection,
openai: Client<OpenAIConfig>,
model: String,
}
impl CompactService {
pub fn new(db: DatabaseConnection, openai: Client<OpenAIConfig>, model: String) -> Self {
Self { db, openai, model }
}
pub async fn compact_room(
&self,
room_id: Uuid,
level: CompactLevel,
user_names: Option<std::collections::HashMap<Uuid, String>>,
) -> Result<CompactSummary, AgentError> {
let messages = self.fetch_room_messages(room_id).await?;
let user_ids: Vec<Uuid> = messages
.iter()
.filter_map(|m| m.sender_id)
.collect::<std::collections::HashSet<_>>()
.into_iter()
.collect();
let user_name_map = match user_names {
Some(map) => map,
None => self.get_user_name_map(&user_ids).await?,
};
if messages.len() <= level.retain_count() {
let retained: Vec<MessageSummary> = messages
.iter()
.map(|m| Self::message_to_summary(m, &user_name_map))
.collect();
return Ok(CompactSummary {
session_id: Uuid::new_v4(),
room_id,
retained,
summary: String::new(),
compacted_at: Utc::now(),
messages_compressed: 0,
usage: None,
});
}
let retain_count = level.retain_count();
let split_index = messages.len().saturating_sub(retain_count);
let (to_summarize, retained_messages) = messages.split_at(split_index);
let retained: Vec<MessageSummary> = retained_messages
.iter()
.map(|m| Self::message_to_summary(m, &user_name_map))
.collect();
let (summary, remote_usage) = self.summarize_messages(to_summarize).await?;
// Build text of what was summarized (for tiktoken fallback)
let summarized_text = to_summarize
.iter()
.map(|m| m.content.as_str())
.collect::<Vec<_>>()
.join("\n");
let usage = resolve_usage(remote_usage, &self.model, &summarized_text, &summary);
Ok(CompactSummary {
session_id: Uuid::new_v4(),
room_id,
retained,
summary,
compacted_at: Utc::now(),
messages_compressed: to_summarize.len(),
usage: Some(usage),
})
}
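The retention split used by `compact_room` above (and repeated in `compact_session`) is simple enough to isolate: the last `retain_count` messages are kept verbatim and everything before them is summarised. A self-contained sketch:

```rust
// Keep the tail of the history verbatim; hand the head to the summariser.
// saturating_sub makes a short history yield an empty "to summarise" slice.
fn split_for_compaction<T>(messages: &[T], retain_count: usize) -> (&[T], &[T]) {
    let split = messages.len().saturating_sub(retain_count);
    messages.split_at(split)
}

fn main() {
    let msgs = [1, 2, 3, 4, 5];
    let (to_summarize, retained) = split_for_compaction(&msgs, 2);
    assert_eq!(to_summarize, &[1, 2, 3]);
    assert_eq!(retained, &[4, 5]);
    // History shorter than retain_count: nothing to summarise.
    let (s, r) = split_for_compaction(&msgs, 10);
    assert!(s.is_empty());
    assert_eq!(r.len(), 5);
    println!("ok");
}
```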
pub async fn compact_session(
&self,
session_id: Uuid,
level: CompactLevel,
user_names: Option<std::collections::HashMap<Uuid, String>>,
) -> Result<CompactSummary, AgentError> {
let messages: Vec<RoomMessageModel> = RoomMessage::find()
.filter(RmCol::Room.eq(session_id))
.order_by_asc(RmCol::Seq)
.all(&self.db)
.await
.map_err(|e| AgentError::Internal(e.to_string()))?;
if messages.is_empty() {
return Err(AgentError::Internal("session has no messages".into()));
}
let user_ids: Vec<Uuid> = messages
.iter()
.filter_map(|m| m.sender_id)
.collect::<std::collections::HashSet<_>>()
.into_iter()
.collect();
let user_name_map = match user_names {
Some(map) => map,
None => self.get_user_name_map(&user_ids).await?,
};
if messages.len() <= level.retain_count() {
let retained: Vec<MessageSummary> = messages
.iter()
.map(|m| Self::message_to_summary(m, &user_name_map))
.collect();
return Ok(CompactSummary {
session_id,
room_id: Uuid::nil(),
retained,
summary: String::new(),
compacted_at: Utc::now(),
messages_compressed: 0,
usage: None,
});
}
let retain_count = level.retain_count();
let split_index = messages.len().saturating_sub(retain_count);
let (to_summarize, retained_messages) = messages.split_at(split_index);
let retained: Vec<MessageSummary> = retained_messages
.iter()
.map(|m| Self::message_to_summary(m, &user_name_map))
.collect();
// Summarize the earlier messages
let (summary, remote_usage) = self.summarize_messages(to_summarize).await?;
// Build text of what was summarized (for tiktoken fallback)
let summarized_text = to_summarize
.iter()
.map(|m| m.content.as_str())
.collect::<Vec<_>>()
.join("\n");
let usage = resolve_usage(remote_usage, &self.model, &summarized_text, &summary);
Ok(CompactSummary {
session_id,
room_id: Uuid::nil(),
retained,
summary,
compacted_at: Utc::now(),
messages_compressed: to_summarize.len(),
usage: Some(usage),
})
}
pub fn summary_as_system_message(summary: &CompactSummary) -> ChatCompletionRequestMessage {
let content = summary_content(summary);
ChatCompletionRequestMessage::System(
async_openai::types::chat::ChatCompletionRequestSystemMessage {
content: async_openai::types::chat::ChatCompletionRequestSystemMessageContent::Text(
content,
),
..Default::default()
},
)
}
/// Check if the message history for a room exceeds the token threshold.
/// Returns `ThresholdResult::Skip` if below threshold, `Compact` if above.
///
/// This method fetches messages and estimates their token count with a rough
/// character-based heuristic (see `estimate_message_tokens`).
/// Call this before deciding whether to run full compaction.
pub async fn check_threshold(
&self,
room_id: Uuid,
config: CompactConfig,
) -> Result<ThresholdResult, AgentError> {
let messages = self.fetch_room_messages(room_id).await?;
let tokens = self.estimate_message_tokens(&messages);
if tokens < config.token_threshold {
return Ok(ThresholdResult::Skip {
estimated_tokens: tokens,
});
}
let level = if config.auto_level {
CompactLevel::auto_select(tokens, config.token_threshold)
} else {
config.default_level
};
Ok(ThresholdResult::Compact {
estimated_tokens: tokens,
level,
})
}
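The gate in `check_threshold` reduces to a small decision function. The two-tier level selection below is an illustrative stand-in for `CompactLevel::auto_select`, whose real cut-offs are defined elsewhere in the crate:

```rust
#[derive(Debug, PartialEq)]
enum Decision {
    Skip,
    Compact { aggressive: bool },
}

// Below threshold: skip. At or above: compact, escalating once the estimate
// reaches twice the threshold (assumed cut-off for illustration).
fn decide(estimated_tokens: usize, threshold: usize) -> Decision {
    if estimated_tokens < threshold {
        Decision::Skip
    } else {
        Decision::Compact {
            aggressive: estimated_tokens >= threshold * 2,
        }
    }
}

fn main() {
    assert_eq!(decide(1_000, 8_000), Decision::Skip);
    assert_eq!(decide(9_000, 8_000), Decision::Compact { aggressive: false });
    assert_eq!(decide(20_000, 8_000), Decision::Compact { aggressive: true });
    println!("ok");
}
```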
/// Auto-compact a room: estimates token count, only compresses if over threshold.
///
/// This is the recommended entry point for automatic compaction.
/// - If tokens < threshold → returns a no-op summary (empty summary, no compression)
/// - If tokens >= threshold → compresses with auto-selected level
pub async fn compact_room_auto(
&self,
room_id: Uuid,
user_names: Option<std::collections::HashMap<Uuid, String>>,
config: CompactConfig,
) -> Result<CompactSummary, AgentError> {
let threshold_result = self.check_threshold(room_id, config).await?;
match threshold_result {
ThresholdResult::Skip { .. } => {
// Below threshold — no compaction needed, return empty summary
let messages = self.fetch_room_messages(room_id).await?;
let user_ids: Vec<Uuid> = messages.iter().filter_map(|m| m.sender_id).collect();
let user_name_map = match user_names {
Some(map) => map,
None => self.get_user_name_map(&user_ids).await?,
};
let retained: Vec<MessageSummary> = messages
.iter()
.map(|m| Self::message_to_summary(m, &user_name_map))
.collect();
return Ok(CompactSummary {
session_id: Uuid::new_v4(),
room_id,
retained,
summary: String::new(),
compacted_at: Utc::now(),
messages_compressed: 0,
usage: None,
});
}
ThresholdResult::Compact { level, .. } => {
// Above threshold — compress with selected level
return self
.compact_room_with_level(room_id, level, user_names)
.await;
}
}
}
/// Compact a room with a specific level (bypassing threshold check).
/// Use this when the caller has already decided compaction is needed.
async fn compact_room_with_level(
&self,
room_id: Uuid,
level: CompactLevel,
user_names: Option<std::collections::HashMap<Uuid, String>>,
) -> Result<CompactSummary, AgentError> {
let messages = self.fetch_room_messages(room_id).await?;
let user_ids: Vec<Uuid> = messages.iter().filter_map(|m| m.sender_id).collect();
let user_name_map = match user_names {
Some(map) => map,
None => self.get_user_name_map(&user_ids).await?,
};
if messages.len() <= level.retain_count() {
let retained: Vec<MessageSummary> = messages
.iter()
.map(|m| Self::message_to_summary(m, &user_name_map))
.collect();
return Ok(CompactSummary {
session_id: Uuid::new_v4(),
room_id,
retained,
summary: String::new(),
compacted_at: Utc::now(),
messages_compressed: 0,
usage: None,
});
}
let retain_count = level.retain_count();
let split_index = messages.len().saturating_sub(retain_count);
let (to_summarize, retained_messages) = messages.split_at(split_index);
let retained: Vec<MessageSummary> = retained_messages
.iter()
.map(|m| Self::message_to_summary(m, &user_name_map))
.collect();
let (summary, remote_usage) = self.summarize_messages(to_summarize).await?;
let summarized_text = to_summarize
.iter()
.map(|m| m.content.as_str())
.collect::<Vec<_>>()
.join("\n");
let usage = resolve_usage(remote_usage, &self.model, &summarized_text, &summary);
Ok(CompactSummary {
session_id: Uuid::new_v4(),
room_id,
retained,
summary,
compacted_at: Utc::now(),
messages_compressed: to_summarize.len(),
usage: Some(usage),
})
}
/// Estimate the total token count of a message list with a rough
/// ~4-characters-per-token heuristic.
fn estimate_message_tokens(&self, messages: &[RoomMessageModel]) -> usize {
let total_chars: usize = messages.iter().map(|m| m.content.len()).sum();
// Rough estimate: ~4 chars per token (safe upper bound)
total_chars / 4
}
fn message_to_summary(
m: &RoomMessageModel,
user_name_map: &std::collections::HashMap<Uuid, String>,
) -> MessageSummary {
let sender_name = m
.sender_id
.and_then(|id| user_name_map.get(&id).cloned())
.unwrap_or_else(|| m.sender_type.to_string());
MessageSummary {
id: m.id,
sender_type: m.sender_type.clone(),
sender_id: m.sender_id,
sender_name,
content: m.content.clone(),
content_type: m.content_type.clone(),
tool_call_id: Self::extract_tool_call_id(&m.content),
send_at: m.send_at,
}
}
fn extract_tool_call_id(content: &str) -> Option<String> {
let content = content.trim();
if let Ok(v) = serde_json::from_str::<Value>(content) {
v.get("tool_call_id")
.and_then(|v| v.as_str())
.map(|s| s.to_string())
} else {
None
}
}
async fn fetch_room_messages(
&self,
room_id: Uuid,
) -> Result<Vec<RoomMessageModel>, AgentError> {
let messages: Vec<RoomMessageModel> = RoomMessage::find()
.filter(RmCol::Room.eq(room_id))
.order_by_asc(RmCol::Seq)
.all(&self.db)
.await
.map_err(|e| AgentError::Internal(e.to_string()))?;
Ok(messages)
}
async fn get_user_name_map(
&self,
user_ids: &[Uuid],
) -> Result<std::collections::HashMap<Uuid, String>, AgentError> {
use std::collections::HashMap;
let mut map = HashMap::new();
if !user_ids.is_empty() {
let users = User::find()
.filter(UserCol::Uid.is_in(user_ids.to_vec()))
.all(&self.db)
.await
.map_err(|e| AgentError::Internal(e.to_string()))?;
for user in users {
map.insert(user.uid, user.username);
}
}
Ok(map)
}
async fn summarize_messages(
&self,
messages: &[RoomMessageModel],
) -> Result<(String, Option<TokenUsage>), AgentError> {
// Collect distinct user IDs
let user_ids: Vec<Uuid> = messages
.iter()
.filter_map(|m| m.sender_id)
.collect::<std::collections::HashSet<_>>()
.into_iter()
.collect();
// Query usernames
let user_name_map = self.get_user_name_map(&user_ids).await?;
// Define sender mapper
let sender_mapper = |m: &RoomMessageModel| {
if let Some(user_id) = m.sender_id {
if let Some(username) = user_name_map.get(&user_id) {
return username.clone();
}
}
m.sender_type.to_string()
};
let body = crate::compact::helpers::messages_to_text(messages, sender_mapper);
let user_msg = ChatCompletionRequestMessage::User(ChatCompletionRequestUserMessage {
content: async_openai::types::chat::ChatCompletionRequestUserMessageContent::Text(
format!(
"Summarise the following conversation concisely, preserving all key facts, \
decisions, and any pending or in-progress work. \
Use this format:\n\n\
**Summary:** <one-paragraph overview>\n\
**Key decisions:** <bullet list or 'none'>\n\
**Open items:** <bullet list or 'none'>\n\n\
Conversation:\n\n{}",
body
),
),
..Default::default()
});
let request = CreateChatCompletionRequest {
model: self.model.clone(),
messages: vec![user_msg],
stream: Some(false),
..Default::default()
};
let response: CreateChatCompletionResponse = self
.openai
.chat()
.create(request)
.await
.map_err(|e| AgentError::OpenAi(e.to_string()))?;
let text = response
.choices
.first()
.and_then(|c| c.message.content.clone())
.unwrap_or_default();
// Prefer remote usage; fall back to None (caller will use tiktoken via resolve_usage)
let remote_usage = response
.usage
.as_ref()
.and_then(|u| TokenUsage::from_remote(u.prompt_tokens, u.completion_tokens));
Ok((text, remote_usage))
}
}
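The chars-per-token heuristic above can be sketched as a standalone function. `estimate_tokens` is a hypothetical free function written for illustration, not part of the crate:

```rust
// Minimal sketch of the `estimate_message_tokens` heuristic: sum the
// character counts of all message contents and divide by four.
fn estimate_tokens(contents: &[&str]) -> usize {
    let total_chars: usize = contents.iter().map(|c| c.len()).sum();
    total_chars / 4
}
```

Because the division truncates, short conversations round down; the real method applies the same arithmetic over `RoomMessageModel.content`.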

libs/agent/compact/types.rs (new file, 130 lines)

@ -0,0 +1,130 @@
use chrono::{DateTime, Utc};
use models::rooms::{
MessageContentType, MessageSenderType, room_message::Model as RoomMessageModel,
};
use serde::{Deserialize, Serialize};
use serde_json::Value;
use uuid::Uuid;
use crate::tokent::TokenUsage;
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct MessageSummary {
pub id: Uuid,
pub sender_type: MessageSenderType,
pub sender_id: Option<Uuid>,
pub sender_name: String,
pub content: String,
pub content_type: MessageContentType,
/// Tool call ID extracted from message content JSON, if present.
pub tool_call_id: Option<String>,
pub send_at: DateTime<Utc>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct CompactSummary {
pub session_id: Uuid,
pub room_id: Uuid,
pub retained: Vec<MessageSummary>,
pub summary: String,
pub compacted_at: DateTime<Utc>,
pub messages_compressed: usize,
/// Token usage for the compaction AI call. `None` if usage data was unavailable.
pub usage: Option<TokenUsage>,
}
#[derive(Debug, Clone, Copy)]
pub enum CompactLevel {
Light,
Aggressive,
}
impl CompactLevel {
pub fn retain_count(&self) -> usize {
match self {
CompactLevel::Light => 5,
CompactLevel::Aggressive => 2,
}
}
/// Auto-select level based on estimated token count and config.
///
/// - `Light` (retain 5): when tokens are moderately over threshold
/// - `Aggressive` (retain 2): when tokens are severely over threshold (2x+)
pub fn auto_select(estimated_tokens: usize, threshold: usize) -> Self {
if threshold == 0 {
return CompactLevel::Light;
}
if estimated_tokens >= threshold * 2 {
CompactLevel::Aggressive
} else {
CompactLevel::Light
}
}
}
/// Configuration for automatic compaction.
#[derive(Debug, Clone, Copy)]
pub struct CompactConfig {
/// Only trigger compaction when estimated token count exceeds this.
/// Set to 0 to disable threshold (always compact when messages > retain_count).
pub token_threshold: usize,
/// If true, auto-select level based on how far over the threshold we are.
/// If false, always use `default_level`.
pub auto_level: bool,
/// Fallback level when `auto_level` is false.
pub default_level: CompactLevel,
}
impl Default for CompactConfig {
fn default() -> Self {
// Trigger when estimated tokens exceed ~8k (reasonable for a context window)
Self {
token_threshold: 8000,
auto_level: true,
default_level: CompactLevel::Light,
}
}
}
/// Result of a threshold check before deciding whether to compact.
#[derive(Debug)]
pub enum ThresholdResult {
/// Token count is below threshold — skip compaction.
Skip { estimated_tokens: usize },
/// Token count exceeds threshold — compact with this level.
Compact {
estimated_tokens: usize,
level: CompactLevel,
},
}
impl From<RoomMessageModel> for MessageSummary {
    fn from(m: RoomMessageModel) -> Self {
        let tool_call_id = Self::extract_tool_call_id(&m.content);
        Self {
            id: m.id,
            sender_name: m.sender_type.to_string(),
            sender_type: m.sender_type,
            sender_id: m.sender_id,
            content: m.content,
            content_type: m.content_type,
            tool_call_id,
            send_at: m.send_at,
        }
    }
}
impl MessageSummary {
fn extract_tool_call_id(content: &str) -> Option<String> {
let content = content.trim();
if let Ok(v) = serde_json::from_str::<Value>(content) {
v.get("tool_call_id")
.and_then(|v| v.as_str())
.map(|s| s.to_string())
} else {
None
}
}
}
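The `CompactLevel::auto_select` policy can be exercised in isolation. This is a standalone sketch with a local `Level` enum standing in for `CompactLevel`:

```rust
// Sketch of `CompactLevel::auto_select`: Light below twice the threshold,
// Aggressive at 2x or more, and Light when the threshold is disabled (0).
#[derive(Debug, PartialEq)]
enum Level {
    Light,
    Aggressive,
}

fn auto_select(estimated_tokens: usize, threshold: usize) -> Level {
    if threshold == 0 {
        return Level::Light;
    }
    if estimated_tokens >= threshold * 2 {
        Level::Aggressive
    } else {
        Level::Light
    }
}
```

With the default `token_threshold` of 8000, an 8k-16k conversation compacts lightly (retain 5) and a 16k+ conversation compacts aggressively (retain 2).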

libs/agent/embed/client.rs (new file, 209 lines)

@ -0,0 +1,209 @@
use async_openai::Client;
use async_openai::types::embeddings::CreateEmbeddingRequestArgs;
use serde::{Deserialize, Serialize};
use crate::embed::qdrant::QdrantClient;
pub struct EmbedClient {
openai: Client<async_openai::config::OpenAIConfig>,
qdrant: QdrantClient,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct EmbedVector {
pub id: String,
pub vector: Vec<f32>,
pub payload: EmbedPayload,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct EmbedPayload {
pub entity_type: String,
pub entity_id: String,
pub text: String,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub extra: Option<serde_json::Value>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct SearchResult {
pub id: String,
pub score: f32,
pub payload: EmbedPayload,
}
impl EmbedClient {
pub fn new(openai: Client<async_openai::config::OpenAIConfig>, qdrant: QdrantClient) -> Self {
Self { openai, qdrant }
}
pub async fn embed_text(&self, text: &str, model: &str) -> crate::Result<Vec<f32>> {
let request = CreateEmbeddingRequestArgs::default()
.model(model)
.input(text)
.build()
.map_err(|e| crate::AgentError::OpenAi(e.to_string()))?;
let response = self
.openai
.embeddings()
.create(request)
.await
.map_err(|e| crate::AgentError::OpenAi(e.to_string()))?;
response
.data
.first()
.map(|d| d.embedding.clone())
.ok_or_else(|| crate::AgentError::OpenAi("no embedding returned".into()))
}
pub async fn embed_batch(&self, texts: &[String], model: &str) -> crate::Result<Vec<Vec<f32>>> {
let request = CreateEmbeddingRequestArgs::default()
.model(model)
.input(texts.to_vec())
.build()
.map_err(|e| crate::AgentError::OpenAi(e.to_string()))?;
let response = self
.openai
.embeddings()
.create(request)
.await
.map_err(|e| crate::AgentError::OpenAi(e.to_string()))?;
let mut embeddings = vec![Vec::new(); texts.len()];
for data in response.data {
if (data.index as usize) < embeddings.len() {
embeddings[data.index as usize] = data.embedding;
}
}
Ok(embeddings)
}
pub async fn upsert(&self, points: Vec<EmbedVector>) -> crate::Result<()> {
self.qdrant.upsert_points(points).await
}
pub async fn search(
&self,
query: &str,
entity_type: &str,
model: &str,
limit: usize,
) -> crate::Result<Vec<SearchResult>> {
let vector = self.embed_text(query, model).await?;
self.qdrant.search(&vector, entity_type, limit).await
}
pub async fn search_with_filter(
&self,
query: &str,
entity_type: &str,
model: &str,
limit: usize,
filter: qdrant_client::qdrant::Filter,
) -> crate::Result<Vec<SearchResult>> {
let vector = self.embed_text(query, model).await?;
self.qdrant
.search_with_filter(&vector, entity_type, limit, filter)
.await
}
pub async fn delete_by_entity_id(
&self,
entity_type: &str,
entity_id: &str,
) -> crate::Result<()> {
self.qdrant.delete_by_filter(entity_type, entity_id).await
}
pub async fn ensure_collection(&self, entity_type: &str, dimensions: u64) -> crate::Result<()> {
self.qdrant.ensure_collection(entity_type, dimensions).await
}
pub async fn ensure_memory_collection(&self, dimensions: u64) -> crate::Result<()> {
self.qdrant.ensure_memory_collection(dimensions).await
}
pub async fn ensure_skill_collection(&self, dimensions: u64) -> crate::Result<()> {
self.qdrant.ensure_skill_collection(dimensions).await
}
/// Embed and store a conversation memory (message) in Qdrant.
pub async fn embed_memory(
&self,
id: &str,
text: &str,
room_id: &str,
user_id: Option<&str>,
) -> crate::Result<()> {
let vector = self.embed_text(text, "").await?;
let point = EmbedVector {
id: id.to_string(),
vector,
payload: EmbedPayload {
entity_type: "memory".to_string(),
entity_id: room_id.to_string(),
text: text.to_string(),
extra: serde_json::json!({ "user_id": user_id }).into(),
},
};
self.qdrant.upsert_points(vec![point]).await
}
/// Search memory embeddings by semantic similarity within a room.
pub async fn search_memories(
&self,
query: &str,
model: &str,
room_id: &str,
limit: usize,
) -> crate::Result<Vec<SearchResult>> {
let vector = self.embed_text(query, model).await?;
        // Over-fetch by one, then keep only hits from the target room. Note:
        // the final list may still hold fewer than `limit` entries when other
        // rooms dominate the nearest neighbors.
        let mut results = self.qdrant.search_memory(&vector, limit + 1).await?;
        results.retain(|r| r.payload.entity_id == room_id);
        results.truncate(limit);
Ok(results)
}
/// Embed and store a skill in Qdrant.
pub async fn embed_skill(
&self,
id: &str,
name: &str,
description: &str,
content: &str,
project_uuid: &str,
) -> crate::Result<()> {
let text = format!("{}: {} {}", name, description, content);
let vector = self.embed_text(&text, "").await?;
let point = EmbedVector {
id: id.to_string(),
vector,
payload: EmbedPayload {
entity_type: "skill".to_string(),
entity_id: project_uuid.to_string(),
text,
extra: serde_json::json!({ "name": name, "description": description }).into(),
},
};
self.qdrant.upsert_points(vec![point]).await
}
/// Search skill embeddings by semantic similarity within a project.
pub async fn search_skills(
&self,
query: &str,
model: &str,
project_uuid: &str,
limit: usize,
) -> crate::Result<Vec<SearchResult>> {
let vector = self.embed_text(query, model).await?;
        // Over-fetch by one, then keep only hits from the target project.
        let mut results = self.qdrant.search_skill(&vector, limit + 1).await?;
        results.retain(|r| r.payload.entity_id == project_uuid);
        results.truncate(limit);
Ok(results)
}
}
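`embed_batch` above relies on the response's per-item `index` field to put each vector back into its original slot, since the API may return embeddings out of order. A std-only sketch of that alignment (`place_by_index` is a hypothetical helper, not crate API):

```rust
// Sketch of the index alignment in `embed_batch`: pre-size an output table,
// then write each returned vector into the slot named by its input index.
fn place_by_index(n: usize, tagged: Vec<(usize, Vec<f32>)>) -> Vec<Vec<f32>> {
    let mut out = vec![Vec::new(); n];
    for (i, v) in tagged {
        if i < out.len() {
            out[i] = v; // out-of-range indices are dropped defensively
        }
    }
    out
}
```

Slots never filled stay as empty vectors, mirroring the `vec![Vec::new(); texts.len()]` initialization in the real method.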

libs/agent/embed/mod.rs (new file, 30 lines)

@ -0,0 +1,30 @@
pub mod client;
pub mod qdrant;
pub mod service;
use async_openai::config::OpenAIConfig;
pub use client::{EmbedClient, EmbedPayload, EmbedVector, SearchResult};
pub use qdrant::QdrantClient;
pub use service::{EmbedService, Embeddable};
pub async fn new_embed_client(config: &config::AppConfig) -> crate::Result<EmbedClient> {
let base_url = config
.get_embed_model_base_url()
.map_err(|e| crate::AgentError::Internal(e.to_string()))?;
let api_key = config
.get_embed_model_api_key()
.map_err(|e| crate::AgentError::Internal(e.to_string()))?;
let qdrant_url = config
.get_qdrant_url()
.map_err(|e| crate::AgentError::Internal(e.to_string()))?;
let qdrant_api_key = config.get_qdrant_api_key();
let openai = async_openai::Client::with_config(
OpenAIConfig::new()
.with_api_base(base_url)
.with_api_key(api_key),
);
let qdrant = QdrantClient::new(&qdrant_url, qdrant_api_key.as_deref()).await?;
Ok(EmbedClient::new(openai, qdrant))
}

libs/agent/embed/qdrant.rs (new file, 312 lines)

@ -0,0 +1,312 @@
use qdrant_client::Qdrant;
use qdrant_client::qdrant::{
Condition, CreateCollectionBuilder, DeletePointsBuilder, Distance, FieldCondition, Filter,
Match, PointStruct, SearchPointsBuilder, UpsertPointsBuilder, VectorParamsBuilder, Vectors,
condition::ConditionOneOf, r#match::MatchValue, point_id::PointIdOptions, value,
};
use std::collections::HashMap;
use std::sync::Arc;
use super::client::{EmbedPayload, SearchResult};
use crate::embed::client::EmbedVector;
pub struct QdrantClient {
inner: Arc<Qdrant>,
}
impl Clone for QdrantClient {
fn clone(&self) -> Self {
Self {
inner: self.inner.clone(),
}
}
}
impl QdrantClient {
pub async fn new(url: &str, api_key: Option<&str>) -> crate::Result<Self> {
let mut builder = Qdrant::from_url(url);
if let Some(key) = api_key {
builder = builder.api_key(key);
}
let inner = builder
.build()
.map_err(|e| crate::AgentError::Qdrant(e.to_string()))?;
Ok(Self {
inner: Arc::new(inner),
})
}
fn collection_name(entity_type: &str) -> String {
format!("embed_{}", entity_type)
}
pub async fn ensure_collection(&self, entity_type: &str, dimensions: u64) -> crate::Result<()> {
let name = Self::collection_name(entity_type);
let exists = self
.inner
.collection_exists(&name)
.await
.map_err(|e| crate::AgentError::Qdrant(e.to_string()))?;
if exists {
return Ok(());
}
let create_collection = CreateCollectionBuilder::new(name)
.vectors_config(VectorParamsBuilder::new(dimensions, Distance::Cosine))
.build();
self.inner
.create_collection(create_collection)
.await
.map_err(|e| crate::AgentError::Qdrant(e.to_string()))?;
Ok(())
}
pub async fn upsert_points(&self, points: Vec<EmbedVector>) -> crate::Result<()> {
if points.is_empty() {
return Ok(());
}
let collection_name = Self::collection_name(&points[0].payload.entity_type);
let qdrant_points: Vec<PointStruct> = points
.into_iter()
.map(|p| {
let mut payload: HashMap<String, qdrant_client::qdrant::Value> = HashMap::new();
payload.insert("entity_type".to_string(), p.payload.entity_type.into());
payload.insert("entity_id".to_string(), p.payload.entity_id.into());
payload.insert("text".to_string(), p.payload.text.into());
if let Some(extra) = p.payload.extra {
let extra_str = serde_json::to_string(&extra).unwrap_or_default();
                    payload.insert(
                        "extra".to_string(),
                        qdrant_client::qdrant::Value {
                            kind: Some(value::Kind::StringValue(extra_str)),
                        },
                    );
}
PointStruct::new(p.id, Vectors::from(p.vector), payload)
})
.collect();
let upsert = UpsertPointsBuilder::new(collection_name, qdrant_points).build();
self.inner
.upsert_points(upsert)
.await
.map_err(|e| crate::AgentError::Qdrant(e.to_string()))?;
Ok(())
}
fn extract_string(value: &qdrant_client::qdrant::Value) -> String {
match &value.kind {
Some(value::Kind::StringValue(s)) => s.clone(),
_ => String::new(),
}
}
pub async fn search(
&self,
vector: &[f32],
entity_type: &str,
limit: usize,
) -> crate::Result<Vec<SearchResult>> {
let collection_name = Self::collection_name(entity_type);
let search = SearchPointsBuilder::new(collection_name, vector.to_vec(), limit as u64)
.with_payload(true)
.build();
let results = self
.inner
.search_points(search)
.await
.map_err(|e| crate::AgentError::Qdrant(e.to_string()))?;
Ok(results
.result
.into_iter()
.filter_map(|p| {
                let entity_type = p
                    .payload
                    .get("entity_type")
                    .map(Self::extract_string)
                    .unwrap_or_default();
                let entity_id = p
                    .payload
                    .get("entity_id")
                    .map(Self::extract_string)
                    .unwrap_or_default();
                let text = p
                    .payload
                    .get("text")
                    .map(Self::extract_string)
                    .unwrap_or_default();
                let extra = p
                    .payload
                    .get("extra")
                    .map(Self::extract_string)
                    .and_then(|s| serde_json::from_str::<serde_json::Value>(&s).ok());
let id =
p.id.and_then(|id| id.point_id_options)
.map(|opts| match opts {
PointIdOptions::Uuid(s) => s,
PointIdOptions::Num(n) => n.to_string(),
})
.unwrap_or_default();
Some(SearchResult {
id,
score: p.score,
payload: EmbedPayload {
entity_type,
entity_id,
text,
extra,
},
})
})
.collect())
}
pub async fn search_with_filter(
&self,
vector: &[f32],
entity_type: &str,
limit: usize,
filter: Filter,
) -> crate::Result<Vec<SearchResult>> {
let collection_name = Self::collection_name(entity_type);
let search = SearchPointsBuilder::new(collection_name, vector.to_vec(), limit as u64)
.with_payload(true)
.filter(filter)
.build();
let results = self
.inner
.search_points(search)
.await
.map_err(|e| crate::AgentError::Qdrant(e.to_string()))?;
Ok(results
.result
.into_iter()
.filter_map(|p| {
                let entity_type = p
                    .payload
                    .get("entity_type")
                    .map(Self::extract_string)
                    .unwrap_or_default();
                let entity_id = p
                    .payload
                    .get("entity_id")
                    .map(Self::extract_string)
                    .unwrap_or_default();
                let text = p
                    .payload
                    .get("text")
                    .map(Self::extract_string)
                    .unwrap_or_default();
                let extra = p
                    .payload
                    .get("extra")
                    .map(Self::extract_string)
                    .and_then(|s| serde_json::from_str::<serde_json::Value>(&s).ok());
let id =
p.id.and_then(|id| id.point_id_options)
.map(|opts| match opts {
PointIdOptions::Uuid(s) => s,
PointIdOptions::Num(n) => n.to_string(),
})
.unwrap_or_default();
Some(SearchResult {
id,
score: p.score,
payload: EmbedPayload {
entity_type,
entity_id,
text,
extra,
},
})
})
.collect())
}
pub async fn delete_by_filter(&self, entity_type: &str, entity_id: &str) -> crate::Result<()> {
let collection_name = Self::collection_name(entity_type);
let filter = Filter {
must: vec![Condition {
condition_one_of: Some(ConditionOneOf::Field(FieldCondition {
key: "entity_id".to_string(),
r#match: Some(Match {
match_value: Some(MatchValue::Keyword(entity_id.to_string())),
}),
..Default::default()
})),
}],
..Default::default()
};
let delete = DeletePointsBuilder::new(collection_name)
.points(filter)
.build();
self.inner
.delete_points(delete)
.await
.map_err(|e| crate::AgentError::Qdrant(e.to_string()))?;
Ok(())
}
pub async fn delete_collection(&self, entity_type: &str) -> crate::Result<()> {
let name = Self::collection_name(entity_type);
self.inner
.delete_collection(name)
.await
.map_err(|e| crate::AgentError::Qdrant(e.to_string()))?;
Ok(())
}
pub async fn ensure_memory_collection(&self, dimensions: u64) -> crate::Result<()> {
self.ensure_collection("memory", dimensions).await
}
pub async fn ensure_skill_collection(&self, dimensions: u64) -> crate::Result<()> {
self.ensure_collection("skill", dimensions).await
}
pub async fn search_memory(
&self,
vector: &[f32],
limit: usize,
) -> crate::Result<Vec<SearchResult>> {
self.search(vector, "memory", limit).await
}
pub async fn search_skill(
&self,
vector: &[f32],
limit: usize,
) -> crate::Result<Vec<SearchResult>> {
self.search(vector, "skill", limit).await
}
}
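The per-entity collection layout above hinges on one naming rule: each entity type gets its own Qdrant collection with an `embed_` prefix. A standalone copy of `collection_name` makes the mapping explicit:

```rust
// Mirror of `QdrantClient::collection_name`: one collection per entity
// type, uniformly prefixed so application collections are easy to spot.
fn collection_name(entity_type: &str) -> String {
    format!("embed_{}", entity_type)
}
```

So `ensure_memory_collection` and `ensure_skill_collection` operate on `embed_memory` and `embed_skill` respectively.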

libs/agent/embed/service.rs (new file, 232 lines)

@ -0,0 +1,232 @@
use async_trait::async_trait;
use qdrant_client::qdrant::Filter;
use sea_orm::DatabaseConnection;
use std::sync::Arc;
use super::client::{EmbedClient, EmbedPayload, EmbedVector, SearchResult};
#[async_trait]
pub trait Embeddable {
fn entity_type(&self) -> &'static str;
fn to_text(&self) -> String;
fn entity_id(&self) -> String;
}
pub struct EmbedService {
client: Arc<EmbedClient>,
db: DatabaseConnection,
model_name: String,
dimensions: u64,
}
impl EmbedService {
pub fn new(
client: EmbedClient,
db: DatabaseConnection,
model_name: String,
dimensions: u64,
) -> Self {
Self {
client: Arc::new(client),
db,
model_name,
dimensions,
}
}
pub async fn embed_issue(
&self,
id: &str,
title: &str,
body: Option<&str>,
) -> crate::Result<()> {
let text = match body {
Some(b) if !b.is_empty() => format!("{}\n\n{}", title, b),
_ => title.to_string(),
};
let vector = self.client.embed_text(&text, &self.model_name).await?;
let point = EmbedVector {
id: id.to_string(),
vector,
payload: EmbedPayload {
entity_type: "issue".to_string(),
entity_id: id.to_string(),
text,
extra: None,
},
};
self.client.upsert(vec![point]).await
}
pub async fn embed_repo(
&self,
id: &str,
name: &str,
description: Option<&str>,
) -> crate::Result<()> {
let text = match description {
Some(d) if !d.is_empty() => format!("{}: {}", name, d),
_ => name.to_string(),
};
let vector = self.client.embed_text(&text, &self.model_name).await?;
let point = EmbedVector {
id: id.to_string(),
vector,
payload: EmbedPayload {
entity_type: "repo".to_string(),
entity_id: id.to_string(),
text,
extra: None,
},
};
self.client.upsert(vec![point]).await
}
    /// Batch-embed a collection of `Embeddable` items in a single request.
    pub async fn embed_issues<T: Embeddable + Send + Sync>(
        &self,
        items: Vec<T>,
    ) -> crate::Result<()> {
if items.is_empty() {
return Ok(());
}
        let texts: Vec<String> = items.iter().map(|i| i.to_text()).collect();
        let embeddings = self.client.embed_batch(&texts, &self.model_name).await?;
        let points: Vec<EmbedVector> = items
            .into_iter()
            .zip(texts)
            .zip(embeddings)
            .map(|((item, text), vector)| EmbedVector {
                id: item.entity_id(),
                vector,
                payload: EmbedPayload {
                    entity_type: item.entity_type().to_string(),
                    entity_id: item.entity_id(),
                    text,
                    extra: None,
                },
            })
            .collect();
        self.client.upsert(points).await
}
pub async fn search_issues(
&self,
query: &str,
limit: usize,
) -> crate::Result<Vec<SearchResult>> {
self.client
.search(query, "issue", &self.model_name, limit)
.await
}
pub async fn search_repos(
&self,
query: &str,
limit: usize,
) -> crate::Result<Vec<SearchResult>> {
self.client
.search(query, "repo", &self.model_name, limit)
.await
}
pub async fn search_issues_filtered(
&self,
query: &str,
limit: usize,
filter: Filter,
) -> crate::Result<Vec<SearchResult>> {
self.client
.search_with_filter(query, "issue", &self.model_name, limit, filter)
.await
}
pub async fn delete_issue_embedding(&self, issue_id: &str) -> crate::Result<()> {
self.client.delete_by_entity_id("issue", issue_id).await
}
pub async fn delete_repo_embedding(&self, repo_id: &str) -> crate::Result<()> {
self.client.delete_by_entity_id("repo", repo_id).await
}
pub async fn ensure_collections(&self) -> crate::Result<()> {
self.client
.ensure_collection("issue", self.dimensions)
.await?;
self.client
.ensure_collection("repo", self.dimensions)
.await?;
self.client.ensure_skill_collection(self.dimensions).await?;
self.client.ensure_memory_collection(self.dimensions).await?;
Ok(())
}
pub fn db(&self) -> &DatabaseConnection {
&self.db
}
pub fn client(&self) -> &Arc<EmbedClient> {
&self.client
}
/// Embed a project skill into Qdrant for vector-based semantic search.
pub async fn embed_skill(
&self,
skill_id: i64,
name: &str,
description: Option<&str>,
content: &str,
project_uuid: &str,
) -> crate::Result<()> {
let desc = description.unwrap_or_default();
let id = skill_id.to_string();
self.client
.embed_skill(&id, name, desc, content, project_uuid)
.await
}
/// Search skills by semantic similarity within a project.
pub async fn search_skills(
&self,
query: &str,
project_uuid: &str,
limit: usize,
) -> crate::Result<Vec<SearchResult>> {
self.client
.search_skills(query, &self.model_name, project_uuid, limit)
.await
}
/// Embed a conversation message into Qdrant as a memory vector.
pub async fn embed_memory(
&self,
message_id: i64,
text: &str,
room_id: &str,
user_id: Option<&str>,
) -> crate::Result<()> {
let id = message_id.to_string();
self.client
.embed_memory(&id, text, room_id, user_id)
.await
}
/// Search past conversation messages by semantic similarity within a room.
pub async fn search_memories(
&self,
query: &str,
room_id: &str,
limit: usize,
) -> crate::Result<Vec<SearchResult>> {
self.client
.search_memories(query, &self.model_name, room_id, limit)
.await
}
}
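The text assembly in `embed_issue` (and the analogous `embed_repo`) follows one pattern: use the title alone unless a non-empty body is available. A standalone sketch (`issue_text` is a hypothetical helper):

```rust
// Sketch of the embedding-text assembly in `embed_issue`: title alone,
// or title + blank line + body when the body is present and non-empty.
fn issue_text(title: &str, body: Option<&str>) -> String {
    match body {
        Some(b) if !b.is_empty() => format!("{}\n\n{}", title, b),
        _ => title.to_string(),
    }
}
```

Treating `Some("")` the same as `None` keeps empty bodies from adding a dangling blank line to the embedded text.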

libs/agent/error.rs (new file, 31 lines)

@ -0,0 +1,31 @@
use thiserror::Error;
#[derive(Error, Debug)]
pub enum AgentError {
#[error("openai error: {0}")]
OpenAi(String),
#[error("qdrant error: {0}")]
Qdrant(String),
#[error("internal error: {0}")]
Internal(String),
}
pub type Result<T> = std::result::Result<T, AgentError>;
impl From<async_openai::error::OpenAIError> for AgentError {
fn from(e: async_openai::error::OpenAIError) -> Self {
AgentError::OpenAi(e.to_string())
}
}
impl From<qdrant_client::QdrantError> for AgentError {
fn from(e: qdrant_client::QdrantError) -> Self {
AgentError::Qdrant(e.to_string())
}
}
impl From<sea_orm::DbErr> for AgentError {
fn from(e: sea_orm::DbErr) -> Self {
AgentError::Internal(e.to_string())
}
}

libs/agent/lib.rs (new file, 36 lines)

@ -0,0 +1,36 @@
pub mod chat;
pub mod client;
pub mod compact;
pub mod embed;
pub mod error;
pub mod perception;
pub mod react;
pub mod task;
pub mod tokent;
pub mod tool;
pub use task::TaskService;
pub use tokent::{TokenUsage, resolve_usage};
pub use perception::{PerceptionService, SkillContext, SkillEntry, ToolCallEvent};
use async_openai::Client;
use async_openai::config::OpenAIConfig;
pub use chat::{
AiContextSenderType, AiRequest, AiStreamChunk, ChatService, Mention, RoomMessageContext,
StreamCallback,
};
pub use client::{AiCallResponse, AiClientConfig, call_with_params, call_with_retry};
pub use compact::{CompactConfig, CompactLevel, CompactService, CompactSummary, MessageSummary};
pub use embed::{EmbedClient, EmbedService, QdrantClient, SearchResult};
pub use error::{AgentError, Result};
pub use react::{
Hook, HookAction, NoopHook, ReactAgent, ReactConfig, ReactStep, ToolCallAction, TracingHook,
};
pub use tool::{
ToolCall, ToolCallResult, ToolContext, ToolDefinition, ToolError, ToolExecutor, ToolParam,
ToolRegistry, ToolResult, ToolSchema,
};
#[derive(Clone)]
pub struct AgentService {
pub client: Client<OpenAIConfig>,
}


@ -0,0 +1,167 @@
//! Active skill awareness — proactive skill retrieval triggered by explicit user intent.
//!
//! The agent proactively loads a specific skill when the user explicitly references it
//! in their message. Patterns include:
//!
//! - Direct slug mention: "用 code-review" ("use code-review"), "使用 skill:code-review" ("use skill:code-review"), "@code-review"
//! - Task-based invocation: "帮我 code review" ("help me code review"), "做一次 security scan" ("do a security scan")
//! - Intent keywords with skill context: "review 我的 PR" ("review my PR"), "scan for bugs"
//!
//! This is the highest-priority perception mode — if the user explicitly asks for a
//! skill, it always gets injected regardless of auto/passive scores.
use super::{SkillContext, SkillEntry};
use once_cell::sync::Lazy;
use regex::Regex;
/// Active skill awareness that detects explicit skill invocations in user messages.
#[derive(Debug, Clone, Default)]
pub struct ActiveSkillAwareness;
impl ActiveSkillAwareness {
pub fn new() -> Self {
Self
}
/// Detect if the user explicitly invoked a skill in their message.
///
/// Returns the first matching skill, or `None` if no explicit invocation is found.
///
/// Matching patterns:
/// - `用 <slug>` / `使用 <slug>` (Chinese: "use / apply <slug>")
/// - `skill:<slug>` (explicit namespace)
/// - `@<slug>` (GitHub-style mention)
/// - `帮我 <slug>` / `<name> 帮我` (Chinese: "help me <slug>")
/// - `做一次 <name>` / `进行一次 <name>` (Chinese: "do a <name>")
pub fn detect(&self, input: &str, skills: &[SkillEntry]) -> Option<SkillContext> {
let input_lower = input.to_lowercase();
// Try each matching pattern in priority order.
if let Some(skill) = self.match_by_prefix_pattern(&input_lower, skills) {
return Some(skill);
}
// Try matching by skill name (for natural language invocations).
if let Some(skill) = self.match_by_name(&input_lower, skills) {
return Some(skill);
}
// Try matching by slug substring in the message.
self.match_by_slug_substring(&input_lower, skills)
}
/// Pattern: "用 code-review", "使用 skill:xxx", "@xxx", "skill:xxx"
fn match_by_prefix_pattern(&self, input: &str, skills: &[SkillEntry]) -> Option<SkillContext> {
        // Pattern 1: English verb prefixes "use ", "using ", "apply ", "with "
static USE_PAT: Lazy<Regex> =
Lazy::new(|| Regex::new(r"(?i)^\s*(?:use|using|apply|with)\s+([a-z0-9/_-]+)").unwrap());
if let Some(caps) = USE_PAT.captures(input) {
let slug = caps.get(1)?.as_str().trim();
return self.find_skill_by_slug(slug, skills);
}
// Pattern 2: skill:xxx
static SKILL_COLON_PAT: Lazy<Regex> =
Lazy::new(|| Regex::new(r"(?i)skill\s*:\s*([a-z0-9/_-]+)").unwrap());
if let Some(caps) = SKILL_COLON_PAT.captures(input) {
let slug = caps.get(1)?.as_str().trim();
return self.find_skill_by_slug(slug, skills);
}
// Pattern 3: @xxx (mention style)
static AT_PAT: Lazy<Regex> =
Lazy::new(|| Regex::new(r"@([a-z0-9][a-z0-9_/-]*[a-z0-9])").unwrap());
if let Some(caps) = AT_PAT.captures(input) {
let slug = caps.get(1)?.as_str().trim();
return self.find_skill_by_slug(slug, skills);
}
        // Pattern 4: Chinese invocation verbs: 帮我 ("help me"), 做一个 ("do a"),
        // 进行一次 ("perform a"), 做 ("do"), 使用 / 用 ("use")
static ZH_PAT: Lazy<Regex> = Lazy::new(
|| Regex::new(r"(?ix)[\u4e00-\u9fff]+\s+(?:帮我|做一个|进行一次|做|使用|用)\s+([a-z0-9][a-z0-9_/-]{0,30})")
.unwrap(),
);
if let Some(caps) = ZH_PAT.captures(input) {
let slug_or_name = caps.get(1)?.as_str().trim();
return self
.find_skill_by_slug(slug_or_name, skills)
.or_else(|| self.find_skill_by_name(slug_or_name, skills));
}
None
}
/// Match by skill name in natural language (e.g., "code review" → "code-review")
fn match_by_name(&self, input: &str, skills: &[SkillEntry]) -> Option<SkillContext> {
for skill in skills {
// Normalize skill name to a search pattern: "Code Review" -> "code review"
let name_lower = skill.name.to_lowercase();
// Direct substring match (the skill name appears in the input).
if input.contains(&name_lower) {
return Some(SkillContext {
label: format!("Active skill: {}", skill.name),
content: format!("# {} (actively invoked)\n\n{}", skill.name, skill.content),
});
}
// Try removing hyphens/underscores: "code-review" contains "code review"
let normalized_name = name_lower.replace(['-', '_'], " ");
if input.contains(&normalized_name) {
return Some(SkillContext {
label: format!("Active skill: {}", skill.name),
content: format!("# {} (actively invoked)\n\n{}", skill.name, skill.content),
});
}
}
None
}
/// Match by slug substring anywhere in the message.
fn match_by_slug_substring(&self, input: &str, skills: &[SkillEntry]) -> Option<SkillContext> {
// Remove common command words to isolate the slug.
        let cleaned = input
            .replace("please ", "")
            .replace("帮我", "");
for skill in skills {
let slug = skill.slug.to_lowercase();
// Check if the slug (or any segment of it) appears as a word.
if cleaned.contains(&slug) || slug.split('/').any(|seg| cleaned.contains(seg) && seg.len() > 3)
{
return Some(SkillContext {
label: format!("Active skill: {}", skill.name),
content: format!("# {} (actively invoked)\n\n{}", skill.name, skill.content),
});
}
}
None
}
fn find_skill_by_slug(&self, slug: &str, skills: &[SkillEntry]) -> Option<SkillContext> {
let slug_lower = slug.to_lowercase();
skills.iter().find(|s| s.slug.to_lowercase() == slug_lower).map(|skill| {
SkillContext {
label: format!("Active skill: {}", skill.name),
content: format!("# {} (actively invoked)\n\n{}", skill.name, skill.content),
}
})
}
fn find_skill_by_name(&self, name: &str, skills: &[SkillEntry]) -> Option<SkillContext> {
let name_lower = name.to_lowercase();
skills.iter().find(|s| s.name.to_lowercase() == name_lower).map(|skill| {
SkillContext {
label: format!("Active skill: {}", skill.name),
content: format!("# {} (actively invoked)\n\n{}", skill.name, skill.content),
}
})
}
}
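Pattern 1 above ("use <slug>") can be approximated without the regex crate. This std-only sketch (`parse_use_slug` is a hypothetical helper; the real `USE_PAT` is case-insensitive and constrains the slug charset) shows the verb-prefix-then-token idea:

```rust
// Std-only sketch of the "use <slug>" invocation pattern: strip a leading
// command verb, then take the next whitespace-delimited token as the slug.
fn parse_use_slug(input: &str) -> Option<&str> {
    let trimmed = input.trim_start();
    for prefix in ["use ", "using ", "apply ", "with "] {
        if let Some(rest) = trimmed.strip_prefix(prefix) {
            return rest.split_whitespace().next();
        }
    }
    None
}
```

The extracted token would then be looked up against the enabled skills, as `find_skill_by_slug` does with a case-insensitive comparison.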


@ -0,0 +1,178 @@
//! Auto skill awareness — background scanning for skill relevance.
//!
//! Periodically (or on-demand) scans the conversation context to identify
//! which enabled skills might be relevant, based on keyword overlap between
//! the skill's metadata (name, description, content snippets) and the
//! conversation text.
//!
//! This is the "ambient awareness" mode — the agent is always aware of
//! which skills might apply without the user explicitly invoking them.
use super::{SkillContext, SkillEntry};
/// Auto skill awareness config.
#[derive(Debug, Clone)]
pub struct AutoSkillAwareness {
/// Minimum keyword overlap score (0.01.0) to consider a skill relevant.
min_score: f32,
/// Maximum number of skills to inject via auto-awareness.
max_skills: usize,
}
impl Default for AutoSkillAwareness {
fn default() -> Self {
Self {
min_score: 0.15,
max_skills: 3,
}
}
}
impl AutoSkillAwareness {
pub fn new(min_score: f32, max_skills: usize) -> Self {
Self { min_score, max_skills }
}
/// Detect relevant skills by scoring keyword overlap between skill metadata
/// and the conversation text (current input + recent history).
///
/// Returns up to `max_skills` skills sorted by relevance score.
pub async fn detect(
&self,
current_input: &str,
history: &[String],
skills: &[SkillEntry],
) -> Vec<SkillContext> {
if skills.is_empty() {
return Vec::new();
}
// Build a combined corpus from current input and recent history (last 5 messages).
let history_text: String = history
.iter()
.rev()
.take(5)
.map(|s| s.as_str())
.collect::<Vec<_>>()
.join(" ");
let corpus = format!("{} {}", current_input, history_text).to_lowercase();
// Extract keywords from the corpus (split on whitespace + strip punctuation).
let corpus_keywords = Self::extract_keywords(&corpus);
if corpus_keywords.is_empty() {
return Vec::new();
}
// Score each skill.
let mut scored: Vec<_> = skills
.iter()
.map(|skill| {
let score = Self::score_skill(&corpus_keywords, skill);
(score, skill)
})
.filter(|(score, _)| *score >= self.min_score)
.collect();
// Sort descending by score.
scored.sort_by(|a, b| b.0.partial_cmp(&a.0).unwrap_or(std::cmp::Ordering::Equal));
scored
.into_iter()
.take(self.max_skills)
.map(|(_, skill)| {
// Extract a short relevant excerpt around the first keyword match.
let excerpt = Self::best_excerpt(&corpus, skill);
SkillContext {
label: format!("Auto skill: {}", skill.name),
content: excerpt,
}
})
.collect()
}
/// Extract meaningful keywords from text.
fn extract_keywords(text: &str) -> Vec<String> {
// Common English + Chinese stopwords to filter out.
        const STOPWORDS: &[&str] = &[
            "the", "a", "an", "is", "are", "was", "were", "be", "been", "being",
            "have", "has", "had", "do", "does", "did", "will", "would", "could",
            "should", "may", "might", "can", "to", "of", "in", "for", "on", "with",
            "at", "by", "from", "as", "or", "and", "but", "if", "not", "no", "so",
            "this", "that", "these", "those", "it", "its", "i", "you", "he", "she",
            "we", "they", "what", "which", "who", "when", "where", "why", "how",
            "all", "each", "every", "both", "few", "more", "most", "other", "some",
            "such", "only", "own", "same", "than", "too", "very", "just", "also",
            "now", "here", "there", "then", "once", "again", "always", "ever",
            "的", "了", "是", "在", "我", "你", "他", "她", "它", "和", "就", "不",
            "都", "也", "这", "那", "有", "没有", "一个", "什么", "怎么", "为什么",
            "我们", "你们", "他们", "这个", "那个", "因为", "所以", "如果", "但是", "然后",
            "可以", "已经", "现在", "就是", "还是", "非常", "很", "请", "吗", "呢", "吧", "比较",
];
        text.split_whitespace()
            .filter_map(|w| {
                let w_clean = w.trim_matches(|c: char| !c.is_alphanumeric());
                if w_clean.len() >= 3 && !STOPWORDS.contains(&w_clean) {
                    // Keep the trimmed token so punctuation never leaks into keywords.
                    Some(w_clean.to_lowercase())
                } else {
                    None
                }
            })
            .collect()
}
/// Score a skill by keyword overlap between the corpus keywords and the skill's
/// name + description + content (first 500 chars).
fn score_skill(corpus_keywords: &[String], skill: &SkillEntry) -> f32 {
let skill_text = format!(
"{} {}",
skill.name,
skill.description.as_deref().unwrap_or("")
);
let skill_text = skill_text.to_lowercase();
let skill_keywords = Self::extract_keywords(&skill_text);
let content_sample = skill.content.chars().take(500).collect::<String>().to_lowercase();
let content_keywords = Self::extract_keywords(&content_sample);
let all_skill_keywords = [&skill_keywords[..], &content_keywords[..]].concat();
if all_skill_keywords.is_empty() {
return 0.0;
}
let overlap: usize = corpus_keywords
.iter()
.filter(|kw| all_skill_keywords.iter().any(|sk| sk.contains(kw.as_str()) || kw.as_str().contains(sk.as_str())))
.count();
overlap as f32 / all_skill_keywords.len().max(1) as f32
}
    /// Extract the best excerpt from skill content: the non-empty line that shares the most
    /// keywords with the corpus (content is split on single newlines, so each line is a
    /// candidate).
    fn best_excerpt(corpus: &str, skill: &SkillEntry) -> String {
        // Score each non-empty line by keyword overlap with the corpus.
let corpus_kws = Self::extract_keywords(corpus);
let best_para = skill
.content
.split('\n')
.filter(|para| !para.trim().is_empty())
.map(|para| {
let para_kws = Self::extract_keywords(&para.to_lowercase());
let overlap: usize = corpus_kws
.iter()
.filter(|kw| para_kws.iter().any(|pk| pk.contains(kw.as_str()) || kw.as_str().contains(pk.as_str())))
.count();
(overlap, para)
})
.filter(|(score, _)| *score > 0)
.max_by_key(|(score, _)| *score);
if let Some((_, para)) = best_para {
            // Return the best-matching line with a header.
format!("# {} (auto-matched)\n\n{}", skill.name, para.trim())
} else {
// Fallback: use first 300 chars of content as excerpt.
let excerpt = skill.content.chars().take(300).collect::<String>();
format!("# {} (auto-matched)\n\n{}...", skill.name, excerpt.trim())
}
}
}
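The scoring above reduces to two pure helpers: tokenize with length and stopword filtering, then count cross-`contains` keyword hits normalized by the skill's keyword count. A dependency-free sketch (the tiny stopword list and function names here are illustrative, not the crate's API):

```rust
// Illustrative sketch of keyword extraction + overlap scoring.

fn extract_keywords(text: &str) -> Vec<String> {
    // Tiny stand-in stopword list for the sketch.
    const STOPWORDS: &[&str] = &["the", "and", "for", "this"];
    text.split_whitespace()
        .filter_map(|w| {
            let clean = w.trim_matches(|c: char| !c.is_alphanumeric());
            if clean.len() >= 3 && !STOPWORDS.contains(&clean) {
                Some(clean.to_lowercase())
            } else {
                None
            }
        })
        .collect()
}

// Fraction of skill keywords that fuzzily match (substring either way)
// some corpus keyword.
fn overlap_score(corpus: &[String], skill: &[String]) -> f32 {
    if skill.is_empty() {
        return 0.0;
    }
    let hits = corpus
        .iter()
        .filter(|kw| skill.iter().any(|sk| sk.contains(kw.as_str()) || kw.contains(sk.as_str())))
        .count();
    hits as f32 / skill.len() as f32
}

fn main() {
    let corpus = extract_keywords("security review of this deployment");
    let skill = extract_keywords("security scanning and review");
    assert!(overlap_score(&corpus, &skill) > 0.15);
}
```

Note that `len()` is a byte count, so a single CJK character (3 bytes in UTF-8) passes the `>= 3` filter even though it is one character.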

View File

@ -0,0 +1,131 @@
//! Skill perception system for the AI agent.
//!
//! Provides three perception modes for injecting relevant skills into the agent's context:
//!
//! - **Auto (automatic perception)**: Background awareness that scans conversation content
//!   for skill relevance based on keyword matching and semantic similarity.
//!
//! - **Active (proactive perception)**: Proactive skill retrieval triggered by explicit user
//!   intent, such as mentioning a skill slug directly in the message. Both keyword and
//!   vector-based.
//!
//! - **Passive (reactive perception)**: Reactive skill retrieval triggered by tool-call
//!   events, such as when the agent mentions a specific skill in its reasoning. Both keyword
//!   and vector-based.
pub mod active;
pub mod auto;
pub mod passive;
pub mod vector;
pub use active::ActiveSkillAwareness;
pub use auto::AutoSkillAwareness;
pub use passive::PassiveSkillAwareness;
pub use vector::{VectorActiveAwareness, VectorPassiveAwareness};
use async_openai::types::chat::ChatCompletionRequestMessage;
/// A chunk of skill context ready to be injected into the message list.
#[derive(Debug, Clone)]
pub struct SkillContext {
/// Human-readable label shown to the AI, e.g. "Active skill: code-review"
pub label: String,
/// The actual skill content to inject.
pub content: String,
}
/// Converts skill context into a system message for injection.
impl SkillContext {
pub fn to_system_message(self) -> ChatCompletionRequestMessage {
use async_openai::types::chat::{
ChatCompletionRequestSystemMessage,
ChatCompletionRequestSystemMessageContent,
};
ChatCompletionRequestMessage::System(ChatCompletionRequestSystemMessage {
content: ChatCompletionRequestSystemMessageContent::Text(format!(
"[{}]\n{}",
self.label, self.content
)),
..Default::default()
})
}
}
/// Unified perception service combining all three modes.
#[derive(Debug, Clone)]
pub struct PerceptionService {
pub auto: AutoSkillAwareness,
pub active: ActiveSkillAwareness,
pub passive: PassiveSkillAwareness,
}
impl Default for PerceptionService {
fn default() -> Self {
Self {
auto: AutoSkillAwareness::default(),
active: ActiveSkillAwareness::default(),
passive: PassiveSkillAwareness::default(),
}
}
}
impl PerceptionService {
/// Inject relevant skill context into the message list based on current conversation state.
///
/// - **auto**: Scans the current input and conversation history for skill-relevant keywords
/// and injects matching skills that are enabled.
    /// - **active**: Checks if the user explicitly invoked a skill by slug (e.g. "use code-review")
/// and injects it.
/// - **passive**: Checks if any tool-call events or prior observations mention a skill
/// slug and injects the matching skill.
///
/// Returns a list of system messages to prepend to the conversation.
pub async fn inject_skills(
&self,
input: &str,
history: &[String],
tool_calls: &[ToolCallEvent],
enabled_skills: &[SkillEntry],
) -> Vec<SkillContext> {
let mut results = Vec::new();
// Active: explicit skill invocation (highest priority)
if let Some(skill) = self.active.detect(input, enabled_skills) {
results.push(skill);
}
// Passive: triggered by tool-call events
for tc in tool_calls {
if let Some(skill) = self.passive.detect(tc, enabled_skills) {
if !results.iter().any(|r: &SkillContext| r.label == skill.label) {
results.push(skill);
}
}
}
// Auto: keyword-based relevance matching
let auto_results = self.auto.detect(input, history, enabled_skills).await;
for skill in auto_results {
if !results.iter().any(|r: &SkillContext| r.label == skill.label) {
results.push(skill);
}
}
results
}
}
/// A tool-call event used for passive skill detection.
#[derive(Debug, Clone)]
pub struct ToolCallEvent {
pub tool_name: String,
pub arguments: String,
}
/// A skill entry from the database, used for matching.
#[derive(Debug, Clone)]
pub struct SkillEntry {
pub slug: String,
pub name: String,
pub description: Option<String>,
pub content: String,
}
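When several modes fire, `inject_skills` merges them in priority order (active, then passive, then auto) and keeps only the first context per label. A standalone sketch of that merge, with stand-in type and function names:

```rust
// Priority merge with label-based dedup, as in PerceptionService::inject_skills.

#[derive(Debug, Clone)]
struct SkillContext {
    label: String,
    content: String,
}

fn merge(
    active: Vec<SkillContext>,
    passive: Vec<SkillContext>,
    auto_: Vec<SkillContext>,
) -> Vec<SkillContext> {
    let mut results: Vec<SkillContext> = Vec::new();
    // Earlier sources win: a later context with a duplicate label is dropped.
    for ctx in active.into_iter().chain(passive).chain(auto_) {
        if !results.iter().any(|r| r.label == ctx.label) {
            results.push(ctx);
        }
    }
    results
}

fn main() {
    let a = vec![SkillContext { label: "code-review".into(), content: "active".into() }];
    let p = vec![SkillContext { label: "code-review".into(), content: "passive".into() }];
    let merged = merge(a, p, Vec::new());
    assert_eq!(merged.len(), 1);
    assert_eq!(merged[0].content, "active");
}
```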

View File

@ -0,0 +1,144 @@
//! Passive skill awareness — reactive skill retrieval triggered by events.
//!
//! The agent passively activates a skill when its slug or name appears in:
//!
//! - Tool call arguments (e.g., a tool is called with a repository name that matches a "git" skill)
//! - Tool call results / observations (e.g., a linter reports issues matching a "code-review" skill)
//! - System events emitted during the agent loop (e.g., "PR opened" → "pr-review" skill)
//!
//! This is lower-priority than active but higher than auto — it's triggered by
//! specific events rather than ambient relevance scoring.
use super::{SkillContext, SkillEntry, ToolCallEvent};
/// Passive skill awareness triggered by tool-call and event context.
#[derive(Debug, Clone, Default)]
pub struct PassiveSkillAwareness;
impl PassiveSkillAwareness {
pub fn new() -> Self {
Self
}
/// Detect skill activation from tool-call events.
///
/// The agent can passively "wake up" a skill when:
/// - A tool call's name or arguments contain a skill slug or keyword
/// - A tool call result mentions a skill name
///
/// This is primarily driven by tool naming conventions and argument patterns.
/// For example, a tool named `git_diff` might passively activate a `git` skill.
pub fn detect(&self, event: &ToolCallEvent, skills: &[SkillEntry]) -> Option<SkillContext> {
let tool_name = event.tool_name.to_lowercase();
let args = event.arguments.to_lowercase();
        for skill in skills {
            let slug = skill.slug.to_lowercase();
            let name = skill.name.to_lowercase();
            // Trigger 1: Tool name contains a skill slug segment.
            // e.g., tool "git_blame" → skill "git/*" activates
            if Self::slug_in_text(&tool_name, &slug) {
                return Some(Self::context_from_skill(skill, "tool invocation"));
            }
            // Trigger 2: Tool arguments contain skill slug or name keywords.
            // e.g., arguments mention "security" → "security/scan" skill
            if Self::slug_in_text(&args, &slug) || Self::keyword_match(&args, &name) {
                return Some(Self::context_from_skill(skill, "tool arguments"));
            }
        }
        // Trigger 3: Common tool prefixes that map to skill categories.
        // This check is independent of any single skill, so run it once after the loop.
        if let Some(cat_skill) = Self::match_tool_category(&tool_name, skills) {
            return Some(cat_skill);
        }
None
}
/// Detect skill activation from a raw text observation (e.g., tool result text).
pub fn detect_from_text(&self, text: &str, skills: &[SkillEntry]) -> Option<SkillContext> {
let text_lower = text.to_lowercase();
for skill in skills {
let slug = skill.slug.to_lowercase();
let name = skill.name.to_lowercase();
if Self::slug_in_text(&text_lower, &slug) || Self::keyword_match(&text_lower, &name) {
return Some(Self::context_from_skill(skill, "observation match"));
}
}
None
}
/// Match common tool name prefixes to skill categories.
fn match_tool_category(tool_name: &str, skills: &[SkillEntry]) -> Option<SkillContext> {
let category_map = [
("git_", "git"),
("repo_", "repo"),
("issue_", "issue"),
("pr_", "pull_request"),
("pull_request_", "pull_request"),
("code_review", "code-review"),
("security_scan", "security"),
("linter", "linter"),
("test_", "testing"),
("deploy_", "deployment"),
("docker_", "docker"),
("k8s_", "kubernetes"),
("db_", "database"),
("sql_", "database"),
];
for (prefix, category) in category_map {
if tool_name.starts_with(prefix) {
if let Some(skill) = skills.iter().find(|s| {
s.slug.to_lowercase().contains(category)
|| s.name.to_lowercase().contains(category)
}) {
return Some(Self::context_from_skill(skill, "tool category match"));
}
}
}
None
}
/// True if the slug (or a significant segment of it) appears in the text.
fn slug_in_text(text: &str, slug: &str) -> bool {
text.contains(slug)
|| slug
.split('/')
.filter(|seg| seg.len() >= 3)
.any(|seg| text.contains(seg))
}
/// Match skill name keywords against the text (handles multi-word names).
fn keyword_match(text: &str, name: &str) -> bool {
// For multi-word names, require all significant words to appear.
let significant: Vec<_> = name
.split(|c: char| !c.is_alphanumeric())
.filter(|w| w.len() >= 3)
.collect();
if significant.len() >= 2 {
significant.iter().all(|w| text.contains(*w))
} else {
significant.first().map_or(false, |w| text.contains(w))
}
}
fn context_from_skill(skill: &SkillEntry, trigger: &str) -> SkillContext {
SkillContext {
label: format!("Passive skill: {} ({})", skill.name, trigger),
content: format!(
"# {} (passive — {})\n\n{}",
skill.name,
trigger,
skill.content
),
}
}
}
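The two matching helpers are pure string predicates. A standalone sketch (inputs are assumed pre-lowercased, as in the callers above):

```rust
// Slug match: the full slug, or any slug segment of 3+ bytes, appears in the text.
fn slug_in_text(text: &str, slug: &str) -> bool {
    text.contains(slug)
        || slug
            .split('/')
            .filter(|seg| seg.len() >= 3)
            .any(|seg| text.contains(seg))
}

// Name match: for multi-word names every significant word must appear;
// single-word names need just that word.
fn keyword_match(text: &str, name: &str) -> bool {
    let significant: Vec<_> = name
        .split(|c: char| !c.is_alphanumeric())
        .filter(|w| w.len() >= 3)
        .collect();
    if significant.len() >= 2 {
        significant.iter().all(|w| text.contains(*w))
    } else {
        significant.first().map_or(false, |w| text.contains(w))
    }
}

fn main() {
    // Tool "git_blame" wakes a skill with slug "git/workflows" via the "git" segment.
    assert!(slug_in_text("git_blame output", "git/workflows"));
    assert!(keyword_match("run a code review pass", "code review"));
    assert!(!keyword_match("just code", "code review")); // "review" missing
}
```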

View File

@ -0,0 +1,163 @@
//! Vector-based skill and memory awareness using Qdrant embeddings.
//!
//! Leverages semantic similarity search to find relevant skills and conversation
//! memories based on vector embeddings. This is more powerful than keyword matching
//! because it captures semantic meaning, not just surface-level word overlap.
//!
//! - **VectorActiveAwareness**: Searches skills by semantic similarity when the user
//! sends a message, finding skills relevant to the conversation topic.
//!
//! - **VectorPassiveAwareness**: Searches past conversation memories to provide relevant
//! historical context when similar topics arise, based on tool-call patterns.
use async_openai::types::chat::{
ChatCompletionRequestMessage, ChatCompletionRequestSystemMessage,
ChatCompletionRequestSystemMessageContent,
};
use crate::embed::EmbedService;
use crate::perception::SkillContext;
/// Maximum relevant memories to inject.
const MAX_MEMORY_RESULTS: usize = 3;
/// Minimum similarity score (0.0 to 1.0) for memories.
const MIN_MEMORY_SCORE: f32 = 0.72;
/// Maximum skills to return from vector search.
const MAX_SKILL_RESULTS: usize = 3;
/// Minimum similarity score for skills.
const MIN_SKILL_SCORE: f32 = 0.70;
/// Vector-based active skill awareness — semantic search for relevant skills.
///
/// When the user sends a message, this awareness mode searches the Qdrant skill index
/// for skills whose content is semantically similar to the message, even if no keywords
/// match directly. This captures intent beyond explicit skill mentions.
#[derive(Debug, Clone)]
pub struct VectorActiveAwareness {
pub max_skills: usize,
pub min_score: f32,
}
impl Default for VectorActiveAwareness {
fn default() -> Self {
Self {
max_skills: MAX_SKILL_RESULTS,
min_score: MIN_SKILL_SCORE,
}
}
}
impl VectorActiveAwareness {
/// Search for skills semantically relevant to the user's input.
///
/// Uses Qdrant vector search within the given project to find skills whose
/// embedded content is similar to `query`. Only returns results above `min_score`.
pub async fn detect(
&self,
embed_service: &EmbedService,
query: &str,
project_uuid: &str,
) -> Vec<SkillContext> {
let results = match embed_service
.search_skills(query, project_uuid, self.max_skills)
.await
{
Ok(results) => results,
Err(_) => return Vec::new(),
};
results
.into_iter()
.filter(|r| r.score >= self.min_score)
.map(|r| {
let name = r
.payload
.extra
.as_ref()
.and_then(|v| v.get("name"))
.and_then(|v| v.as_str())
.unwrap_or("skill")
.to_string();
SkillContext {
label: format!("[Vector] Skill: {}", name),
content: format!(
"[Relevant skill (score {:.2})]\n{}",
r.score, r.payload.text
),
}
})
.collect()
}
}
/// Vector-based passive memory awareness — retrieve relevant past context.
///
/// When the agent encounters a topic (via tool-call or observation), this awareness
/// searches past conversation messages to find semantically similar prior discussions.
/// This gives the agent memory of how similar situations were handled before.
#[derive(Debug, Clone)]
pub struct VectorPassiveAwareness {
pub max_memories: usize,
pub min_score: f32,
}
impl Default for VectorPassiveAwareness {
fn default() -> Self {
Self {
max_memories: MAX_MEMORY_RESULTS,
min_score: MIN_MEMORY_SCORE,
}
}
}
impl VectorPassiveAwareness {
/// Search for past conversation messages semantically similar to the current context.
///
/// Uses Qdrant to find memories within the same room that share semantic similarity
/// with the given query (usually the current input or a tool-call description).
/// High-scoring results suggest prior discussions on this topic.
pub async fn detect(
&self,
embed_service: &EmbedService,
query: &str,
room_id: &str,
) -> Vec<MemoryContext> {
let results = match embed_service
.search_memories(query, room_id, self.max_memories)
.await
{
Ok(results) => results,
Err(_) => return Vec::new(),
};
results
.into_iter()
.filter(|r| r.score >= self.min_score)
.map(|r| MemoryContext {
score: r.score,
content: r.payload.text,
})
.collect()
}
}
/// A retrieved memory entry from vector search.
#[derive(Debug, Clone)]
pub struct MemoryContext {
    /// Similarity score (0.0 to 1.0).
pub score: f32,
/// The text of the past conversation message.
pub content: String,
}
impl MemoryContext {
/// Format as a system message for injection into the agent context.
pub fn to_system_message(self) -> ChatCompletionRequestMessage {
ChatCompletionRequestMessage::System(ChatCompletionRequestSystemMessage {
content: ChatCompletionRequestSystemMessageContent::Text(format!(
"[Relevant memory (score {:.2})]\n{}",
self.score, self.content
)),
..Default::default()
})
}
}
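Both vector modes apply the same post-search policy: drop hits under `min_score` and cap the result count. A dependency-free sketch of that policy (with an explicit sort added for clarity, and `SearchHit` standing in for the Qdrant result type):

```rust
// Post-search filtering shared by the vector awareness modes.

#[derive(Debug, Clone)]
struct SearchHit {
    score: f32,
    text: String,
}

fn top_hits(mut hits: Vec<SearchHit>, min_score: f32, max_results: usize) -> Vec<SearchHit> {
    // Keep only sufficiently similar hits.
    hits.retain(|h| h.score >= min_score);
    // Highest score first (Qdrant usually returns this order already).
    hits.sort_by(|a, b| b.score.partial_cmp(&a.score).unwrap_or(std::cmp::Ordering::Equal));
    hits.truncate(max_results);
    hits
}

fn main() {
    let hits = vec![
        SearchHit { score: 0.91, text: "deploy runbook".into() },
        SearchHit { score: 0.40, text: "unrelated".into() },
        SearchHit { score: 0.75, text: "rollback notes".into() },
    ];
    let kept = top_hits(hits, 0.70, 3);
    assert_eq!(kept.len(), 2);
    assert_eq!(kept[0].text, "deploy runbook");
}
```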

libs/agent/react/hooks.rs
View File

@ -0,0 +1,130 @@
//! Observability hooks for the ReAct agent loop.
//!
//! Hooks allow injecting custom behavior (logging, tracing, filtering, termination)
//! at each step of the reasoning loop without coupling to the core agent logic.
//!
//! Inspired by rig's `PromptHook` trait.
//!
//! # Example
//!
//! ```ignore
//! #[derive(Clone)]
//! struct MyHook;
//!
//! #[async_trait]
//! impl Hook for MyHook {
//! async fn on_thought(&self, step: usize, thought: &str) -> HookAction {
//! tracing::info!("[step {}] thinking: {}", step, thought);
//! HookAction::Continue
//! }
//! }
//!
//! let agent = ReactAgent::new(prompt, tools, config).with_hook(MyHook);
//! ```
use async_trait::async_trait;
/// Controls whether the agent loop continues after a hook callback.
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum HookAction {
/// Continue processing normally.
Continue,
/// Skip the current step and continue.
Skip,
/// Terminate the loop immediately with the given reason.
Terminate(&'static str),
}
/// Controls behavior after a tool call hook callback.
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum ToolCallAction {
/// Execute the tool normally.
Continue,
/// Skip tool execution and inject a custom result.
Skip(String),
/// Terminate the loop with the given reason.
Terminate(&'static str),
}
/// Default no-op hook that does nothing.
#[derive(Debug, Clone, Copy, Default)]
pub struct NoopHook;
impl Hook for NoopHook {}
impl Hook for () {}
/// A hook that logs everything to stderr using `eprintln`.
/// No external dependencies required.
#[derive(Debug, Clone, Copy, Default)]
pub struct TracingHook;
impl TracingHook {
pub fn new() -> Self {
Self
}
}
#[async_trait]
impl Hook for TracingHook {
async fn on_thought(&self, step: usize, thought: &str) -> HookAction {
eprintln!("[step {}] thought: {}", step, thought);
HookAction::Continue
}
async fn on_tool_call(&self, step: usize, name: &str, args_json: &str) -> ToolCallAction {
eprintln!("[step {}] tool_call: {}({})", step, name, args_json);
ToolCallAction::Continue
}
async fn on_observation(&self, step: usize, observation: &str) -> HookAction {
eprintln!("[step {}] observation: {}", step, observation);
HookAction::Continue
}
async fn on_answer(&self, step: usize, answer: &str) -> HookAction {
eprintln!("[step {}] answer: {}", step, answer);
HookAction::Continue
}
}
/// Hook trait for observing and controlling the ReAct agent loop.
///
/// Implement this trait to inject custom behavior at each step:
/// - Log thoughts, tool calls, observations, and final answers
/// - Filter or redact sensitive data
/// - Dynamically terminate the loop based on content
/// - Inject custom tool results (e.g., for testing or sandboxing)
///
/// All methods have default no-op implementations, so you only need to
/// override the ones you care about.
///
/// Hook callbacks are awaited inline during the agent loop, so keep
/// them fast and avoid blocking I/O. For heavy work, spawn a task
/// and return immediately.
#[async_trait]
pub trait Hook: Send + Sync {
/// Called when the agent emits a thought/reasoning step.
///
/// Return `HookAction::Terminate` to stop the loop early.
async fn on_thought(&self, _step: usize, _thought: &str) -> HookAction {
HookAction::Continue
}
/// Called just before a tool is executed.
///
/// Return `ToolCallAction::Skip(result)` to skip execution and inject `result` instead.
/// Return `ToolCallAction::Terminate` to stop the loop without executing the tool.
async fn on_tool_call(&self, _step: usize, _name: &str, _args_json: &str) -> ToolCallAction {
ToolCallAction::Continue
}
/// Called after a tool returns an observation.
async fn on_observation(&self, _step: usize, _observation: &str) -> HookAction {
HookAction::Continue
}
/// Called when the agent produces a final answer.
async fn on_answer(&self, _step: usize, _answer: &str) -> HookAction {
HookAction::Continue
}
}
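A `ToolCallAction` hook is enough to sandbox tool execution. The sketch below strips the async machinery and shows only the decision flow; the policy and tool names are hypothetical:

```rust
// Decision flow for ToolCallAction, minus the async plumbing.
// `sandbox_policy` is a hypothetical hook: it stubs out shell tools and
// blocks a destructive one; everything else runs normally.

#[derive(Debug)]
enum ToolCallAction {
    Continue,
    Skip(String),
    Terminate(&'static str),
}

fn sandbox_policy(tool_name: &str) -> ToolCallAction {
    match tool_name {
        "shell_exec" => ToolCallAction::Skip("{\"stdout\":\"(sandboxed)\"}".to_string()),
        "danger_rm" => ToolCallAction::Terminate("blocked destructive tool"),
        _ => ToolCallAction::Continue,
    }
}

// Stand-in for real tool execution.
fn run_tool(name: &str) -> String {
    format!("{{\"ran\":\"{}\"}}", name)
}

// What the loop records as the observation for this tool call. In the real
// agent, Terminate aborts the loop with an error; here it maps to an error
// string so the flow stays visible.
fn observe(name: &str) -> String {
    match sandbox_policy(name) {
        ToolCallAction::Skip(injected) => injected, // injected result, tool never runs
        ToolCallAction::Terminate(reason) => format!("{{\"error\":\"{}\"}}", reason),
        ToolCallAction::Continue => run_tool(name),
    }
}

fn main() {
    assert_eq!(observe("shell_exec"), "{\"stdout\":\"(sandboxed)\"}");
    assert_eq!(observe("danger_rm"), "{\"error\":\"blocked destructive tool\"}");
    assert_eq!(observe("git_diff"), "{\"ran\":\"git_diff\"}");
}
```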

View File

@ -0,0 +1,439 @@
//! ReAct (Reasoning + Acting) agent core.
use async_openai::types::chat::FunctionCall;
use async_openai::types::chat::{
ChatCompletionMessageToolCall, ChatCompletionMessageToolCalls,
ChatCompletionRequestAssistantMessage, ChatCompletionRequestAssistantMessageContent,
ChatCompletionRequestMessage, ChatCompletionRequestToolMessage,
ChatCompletionRequestToolMessageContent, ChatCompletionRequestUserMessage,
ChatCompletionRequestUserMessageContent,
};
use uuid::Uuid;
use std::sync::Arc;
use crate::call_with_params;
use crate::error::{AgentError, Result};
use crate::react::hooks::{Hook, HookAction, NoopHook, ToolCallAction};
use crate::react::types::{Action, ReactConfig, ReactStep};
pub use crate::react::types::{ReactConfig as ReActConfig, ReactStep as ReActStep};
/// A ReAct agent that performs multi-step tool-augmented reasoning.
#[derive(Clone)]
pub struct ReactAgent {
messages: Vec<ChatCompletionRequestMessage>,
#[allow(dead_code)]
tool_definitions: Vec<async_openai::types::chat::ChatCompletionTool>,
config: ReactConfig,
step_count: usize,
hook: Arc<dyn Hook>,
}
impl ReactAgent {
/// Create a new agent with a system prompt and OpenAI tool definitions.
pub fn new(
system_prompt: &str,
tools: Vec<async_openai::types::chat::ChatCompletionTool>,
config: ReactConfig,
) -> Self {
        let messages = vec![ChatCompletionRequestMessage::System(
            async_openai::types::chat::ChatCompletionRequestSystemMessage {
                content:
                    async_openai::types::chat::ChatCompletionRequestSystemMessageContent::Text(
                        system_prompt.to_string(),
                    ),
                ..Default::default()
            },
        )];
Self {
messages,
tool_definitions: tools,
config,
step_count: 0,
hook: Arc::new(NoopHook),
}
}
/// Add an initial user message to the conversation.
pub fn add_user_message(&mut self, content: &str) {
self.messages.push(ChatCompletionRequestMessage::User(
ChatCompletionRequestUserMessage {
content: ChatCompletionRequestUserMessageContent::Text(content.to_string()),
..Default::default()
},
));
}
/// Attach a hook to observe and control the agent loop.
///
/// Hooks can log steps, filter content, inject custom tool results,
/// or terminate the loop early. Multiple `.with_hook()` calls replace
/// the previous hook.
///
/// # Example
///
/// ```ignore
/// #[derive(Clone)]
/// struct MyLogger;
///
/// #[async_trait]
/// impl Hook for MyLogger {
/// async fn on_thought(&self, step: usize, thought: &str) -> HookAction {
/// eprintln!("[step {}] thought: {}", step, thought);
/// HookAction::Continue
/// }
/// }
///
/// let agent = ReactAgent::new(prompt, tools, config).with_hook(MyLogger);
/// ```
pub fn with_hook<H: Hook + 'static>(mut self, hook: H) -> Self {
self.hook = Arc::new(hook);
self
}
/// Run the ReAct loop until a final answer is produced or `max_steps` is reached.
///
/// Yields streaming chunks via `on_chunk`. Each step produces:
/// - A `ReactStep::Thought` chunk when the AI emits reasoning
/// - A `ReactStep::Action` chunk when the AI emits a tool call
/// - A `ReactStep::Observation` chunk after the tool executes
/// - A `ReactStep::Answer` chunk when the loop terminates with a final answer
///
/// Hooks are called at each phase (see [Hook]). Return [HookAction::Terminate]
/// from any hook to stop the loop early.
pub async fn run<C>(
&mut self,
model_name: &str,
client_config: &crate::client::AiClientConfig,
mut on_chunk: C,
) -> Result<String>
where
C: FnMut(ReactStep) + Send,
{
loop {
if self.step_count >= self.config.max_steps {
return Err(AgentError::Internal(format!(
"ReAct agent reached max steps ({})",
self.config.max_steps
)));
}
self.step_count += 1;
let step = self.step_count;
let response = call_with_params(
&self.messages,
model_name,
client_config,
0.2, // temperature
4096, // max output tokens
None,
if self.tool_definitions.is_empty() {
None
} else {
Some(self.tool_definitions.as_slice())
},
)
.await?;
let parsed = parse_react_response(&response.content);
let answer = parsed.answer.clone();
let action = parsed.action.clone();
// Emit thought step.
on_chunk(ReactStep::Thought {
step,
thought: parsed.thought.clone(),
});
// Hook: thought
match self.hook.on_thought(step, &parsed.thought).await {
HookAction::Terminate(reason) => {
return Err(AgentError::Internal(format!(
"hook terminated at thought step: {}",
reason
)));
}
                HookAction::Skip => {
                    // No special handling for thoughts; treated like Continue.
                }
HookAction::Continue => {}
}
// Final answer — emit and return.
if let Some(ans) = answer {
on_chunk(ReactStep::Answer {
step,
answer: ans.clone(),
});
// Hook: answer
match self.hook.on_answer(step, &ans).await {
HookAction::Terminate(reason) => {
return Err(AgentError::Internal(format!(
"hook terminated at answer step: {}",
reason
)));
}
_ => {}
}
return Ok(ans);
}
// No answer — either do a tool call or fall back.
let Some(act) = action else {
let content = response.content.clone();
on_chunk(ReactStep::Answer {
step,
answer: content.clone(),
});
// Hook: answer (fallback)
match self.hook.on_answer(step, &content).await {
HookAction::Terminate(reason) => {
return Err(AgentError::Internal(format!(
"hook terminated at fallback answer: {}",
reason
)));
}
_ => {}
}
return Ok(content);
};
on_chunk(ReactStep::Action {
step,
action: act.clone(),
});
let args_json = serde_json::to_string(&act.args).unwrap_or_else(|_| "{}".to_string());
// Hook: tool call — can skip or terminate
match self.hook.on_tool_call(step, &act.name, &args_json).await {
ToolCallAction::Terminate(reason) => {
return Err(AgentError::Internal(format!(
"hook terminated at tool call: {}",
reason
)));
}
ToolCallAction::Skip(injected_result) => {
// Skip actual execution, inject the provided result
let observation = injected_result;
on_chunk(ReactStep::Observation {
step,
observation: observation.clone(),
});
// Hook: observation (injected)
match self.hook.on_observation(step, &observation).await {
HookAction::Terminate(reason) => {
return Err(AgentError::Internal(format!(
"hook terminated at observation (injected): {}",
reason
)));
}
_ => {}
}
// Append observation as a tool message so the model sees it in context.
self.messages.push(ChatCompletionRequestMessage::Tool(
ChatCompletionRequestToolMessage {
tool_call_id: act.id.clone(),
content: ChatCompletionRequestToolMessageContent::Text(observation),
},
));
continue;
}
ToolCallAction::Continue => {}
}
// Append the assistant message with tool_calls to history.
let assistant_msg = build_tool_call_message(&act);
self.messages.push(assistant_msg);
// Execute the tool.
let observation = match &self.config.tool_executor {
Some(exec) => {
let result = exec(act.name.clone(), act.args.clone()).await;
match result {
Ok(v) => serde_json::to_string(&v).unwrap_or_else(|_| "null".to_string()),
Err(e) => serde_json::json!({ "error": e }).to_string(),
}
}
None => serde_json::json!({
"error": format!("no tool executor registered for '{}'", act.name)
})
.to_string(),
};
on_chunk(ReactStep::Observation {
step,
observation: observation.clone(),
});
// Hook: observation
match self.hook.on_observation(step, &observation).await {
HookAction::Terminate(reason) => {
return Err(AgentError::Internal(format!(
"hook terminated at observation step: {}",
reason
)));
}
_ => {}
}
// Append observation as a tool message so the model sees it in context.
self.messages.push(ChatCompletionRequestMessage::Tool(
ChatCompletionRequestToolMessage {
tool_call_id: act.id.clone(),
content: ChatCompletionRequestToolMessageContent::Text(observation),
},
));
}
}
/// Returns the number of steps executed so far.
pub fn steps(&self) -> usize {
self.step_count
}
}
// ---------------------------------------------------------------------------
// Response parsing
// ---------------------------------------------------------------------------
struct ParsedReActResponse {
thought: String,
action: Option<Action>,
answer: Option<String>,
}
/// Parse the AI's text response into a ReAct step.
///
/// The AI is prompted (via system prompt in `ReactAgent::new`) to respond with
/// JSON in one of these forms:
///
/// ```json
/// { "thought": "...", "action": { "name": "tool_name", "arguments": {...} } }
/// { "thought": "...", "answer": "final answer text" }
/// ```
fn parse_react_response(content: &str) -> ParsedReActResponse {
let json_str = extract_json(content).unwrap_or_else(|| content.trim().to_string());
#[derive(serde::Deserialize)]
struct RawStep {
#[serde(default)]
thought: Option<String>,
#[serde(default)]
action: Option<RawAction>,
#[serde(default)]
answer: Option<String>,
#[serde(default)]
name: Option<String>,
#[serde(default, rename = "arguments")]
args: Option<serde_json::Value>,
}
#[derive(serde::Deserialize)]
struct RawAction {
#[serde(default)]
name: Option<String>,
#[serde(default, rename = "arguments")]
args: Option<serde_json::Value>,
}
match serde_json::from_str::<RawStep>(&json_str) {
Ok(raw) => {
let thought = raw.thought.unwrap_or_else(|| "Thinking...".to_string());
let answer = raw.answer;
let action = raw.action.map(|a| Action {
id: Uuid::new_v4().to_string(),
name: a.name.unwrap_or_default(),
args: a.args.unwrap_or(serde_json::Value::Null),
});
// Handle flat format: { "name": "...", "arguments": {...} }
let action = action.or_else(|| {
if raw.name.is_some() || raw.args.is_some() {
Some(Action {
id: Uuid::new_v4().to_string(),
name: raw.name.unwrap_or_default(),
args: raw.args.unwrap_or(serde_json::Value::Null),
})
} else {
None
}
});
ParsedReActResponse {
thought,
action,
answer,
}
}
Err(_) => ParsedReActResponse {
thought: content.to_string(),
action: None,
answer: None,
},
}
}
/// Extract the first JSON object or array from a string, handling markdown fences.
fn extract_json(s: &str) -> Option<String> {
let trimmed = s.trim();
if trimmed.starts_with('{') || trimmed.starts_with('[') {
return Some(trimmed.to_string());
}
    let mut buf = String::new();
    let mut in_fence = false;
    for line in trimmed.lines() {
        let line = line.trim();
        if !in_fence && (line == "```json" || line == "```") {
            in_fence = true;
            continue;
        }
        if in_fence {
            if line == "```" {
                break;
            }
            buf.push_str(line);
            buf.push('\n');
        }
    }
    let result = buf.trim().to_string();
    if !result.is_empty() {
        return Some(result);
    }
    None
}
/// Build an assistant message with tool_calls from an Action.
#[allow(deprecated)]
fn build_tool_call_message(action: &Action) -> ChatCompletionRequestMessage {
let fn_arg_str = serde_json::to_string(&action.args).unwrap_or_else(|_| "{}".to_string());
ChatCompletionRequestMessage::Assistant(ChatCompletionRequestAssistantMessage {
content: Some(ChatCompletionRequestAssistantMessageContent::Text(format!(
"Action: {}",
action.name
))),
name: None,
refusal: None,
audio: None,
tool_calls: Some(vec![ChatCompletionMessageToolCalls::Function(
ChatCompletionMessageToolCall {
id: action.id.clone(),
function: FunctionCall {
name: action.name.clone(),
arguments: fn_arg_str,
},
},
)]),
function_call: None,
})
}

libs/agent/react/mod.rs
View File

@ -0,0 +1,13 @@
//! ReAct (Reason + Act) agent loop for structured tool use.
//!
//! The agent alternates between a **thought** phase (reasoning about what to do)
//! and an **action** phase (calling tools). Observations from tool results feed
//! back into the next thought, enabling multi-step reasoning.
pub mod hooks;
pub mod loop_core;
pub mod types;
pub use hooks::{Hook, HookAction, NoopHook, ToolCallAction, TracingHook};
pub use loop_core::ReactAgent;
pub use types::{ReactConfig, ReactStep};

libs/agent/react/types.rs (new file)
@@ -0,0 +1,94 @@
//! ReAct agent types.
use std::sync::Arc;
use serde::{Deserialize, Serialize};
use uuid::Uuid;
/// Callback for executing a named tool with JSON arguments.
pub type ToolExecutorFn = Arc<
dyn Fn(
String,
serde_json::Value,
) -> std::pin::Pin<
Box<dyn std::future::Future<Output = Result<serde_json::Value, String>> + Send>,
> + Send
+ Sync,
>;
/// Configuration for a ReAct agent.
#[derive(Clone)]
pub struct ReactConfig {
/// Maximum number of ReAct steps before giving up.
pub max_steps: usize,
/// Stop sequences that trigger early termination.
pub stop_sequences: Vec<String>,
/// Optional tool executor callback. If `None`, tool calls return an error.
pub tool_executor: Option<ToolExecutorFn>,
}
impl Default for ReactConfig {
fn default() -> Self {
Self {
max_steps: 10,
stop_sequences: Vec::new(),
tool_executor: None,
}
}
}
impl std::fmt::Debug for ReactConfig {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.debug_struct("ReactConfig")
.field("max_steps", &self.max_steps)
.field("stop_sequences", &self.stop_sequences)
.field(
"tool_executor",
&self
.tool_executor
.as_ref()
.map(|_| "...")
.unwrap_or("<none>"),
)
.finish()
}
}
/// An action (tool call) requested by the ReAct agent.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Action {
pub id: String,
pub name: String,
pub args: serde_json::Value,
}
impl Action {
pub fn new(name: &str, args: serde_json::Value) -> Self {
Self {
id: Uuid::new_v4().to_string(),
name: name.to_string(),
args,
}
}
}
// ---------------------------------------------------------------------------
// Step events emitted during the ReAct loop
// ---------------------------------------------------------------------------
/// A single event emitted during a ReAct step.
///
/// These are yielded via the streaming callback so the caller (service layer)
/// can persist them to the database or forward them to a WebSocket client.
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(tag = "type", rename_all = "snake_case")]
pub enum ReactStep {
/// The AI's reasoning/thinking for this step.
Thought { step: usize, thought: String },
/// The AI requested a tool call.
Action { step: usize, action: Action },
/// Result returned by the executed tool.
Observation { step: usize, observation: String },
/// Final answer produced by the agent.
Answer { step: usize, answer: String },
}

libs/agent/task/mod.rs (new file)
@@ -0,0 +1,22 @@
//! Agent task service — unified task/sub-agent execution framework.
//!
//! A task (`agent_task` record) can be:
//! - A **root task**: initiated by a user or system event.
//! The parent/Supervisor agent spawns sub-tasks and coordinates their results.
//! - A **sub-task**: a unit of work executed by a sub-agent.
//!
//! Execution flow:
//! 1. Create task record (status = pending)
//! 2. Notify listeners (WebSocket: task_started)
//! 3. Spawn execution (tokio::spawn or via room queue)
//! 4. Update progress (status = running, progress = "step 2/5: ...")
//! 5. On completion: update output + status = done / error + status = failed
//! 6. Notify listeners (WebSocket: task_done)
//! 7. If root task: notify parent/Supervisor to aggregate results
//!
//! This module is intentionally kept simple and synchronous with the DB.
//! Long-running execution is delegated to the caller (tokio::spawn).
pub mod service;
pub use service::TaskService;

libs/agent/task/service.rs (new file)
@@ -0,0 +1,209 @@
//! Task service for creating, tracking, and executing agent tasks.
//!
//! All methods are async and interact with the database directly.
//! Execution of the task logic (running the ReAct loop, etc.) is delegated
//! to the caller — this service only manages task lifecycle and state.
use db::database::AppDatabase;
use models::agent_task::{
ActiveModel, AgentType, Column as C, Entity, Model, TaskStatus,
};
use sea_orm::{
entity::EntityTrait, query::{QueryFilter, QueryOrder, QuerySelect}, ActiveModelTrait,
ColumnTrait, DbErr,
};
/// Service for managing agent tasks (root tasks and sub-tasks).
#[derive(Clone)]
pub struct TaskService {
db: AppDatabase,
}
impl TaskService {
pub fn new(db: AppDatabase) -> Self {
Self { db }
}
/// Create a new task (root or sub-task) with status = pending.
pub async fn create(
&self,
project_uuid: impl Into<uuid::Uuid>,
input: impl Into<String>,
agent_type: AgentType,
) -> Result<Model, DbErr> {
self.create_with_parent(project_uuid, None, input, agent_type, None).await
}
/// Create a new sub-task with a parent reference.
pub async fn create_subtask(
&self,
project_uuid: impl Into<uuid::Uuid>,
parent_id: i64,
input: impl Into<String>,
agent_type: AgentType,
title: Option<String>,
) -> Result<Model, DbErr> {
self.create_with_parent(project_uuid, Some(parent_id), input, agent_type, title)
.await
}
async fn create_with_parent(
&self,
project_uuid: impl Into<uuid::Uuid>,
parent_id: Option<i64>,
input: impl Into<String>,
agent_type: AgentType,
title: Option<String>,
) -> Result<Model, DbErr> {
let model = ActiveModel {
project_uuid: sea_orm::Set(project_uuid.into()),
parent_id: sea_orm::Set(parent_id),
agent_type: sea_orm::Set(agent_type),
status: sea_orm::Set(TaskStatus::Pending),
title: sea_orm::Set(title),
input: sea_orm::Set(input.into()),
..Default::default()
};
model.insert(&self.db).await
}
/// Fetch a task by ID or return `RecordNotFound`.
async fn fetch(&self, task_id: i64) -> Result<Model, DbErr> {
Entity::find_by_id(task_id)
.one(&self.db)
.await?
.ok_or_else(|| DbErr::RecordNotFound(format!("agent_task {task_id} not found")))
}
/// Mark a task as running and record the start time.
pub async fn start(&self, task_id: i64) -> Result<Model, DbErr> {
let mut active: ActiveModel = self.fetch(task_id).await?.into();
active.status = sea_orm::Set(TaskStatus::Running);
active.started_at = sea_orm::Set(Some(chrono::Utc::now().into()));
active.updated_at = sea_orm::Set(chrono::Utc::now().into());
active.update(&self.db).await
}
/// Update progress text (e.g., "step 2/5: analyzing PR").
pub async fn update_progress(&self, task_id: i64, progress: impl Into<String>) -> Result<(), DbErr> {
let mut active: ActiveModel = self.fetch(task_id).await?.into();
active.progress = sea_orm::Set(Some(progress.into()));
active.updated_at = sea_orm::Set(chrono::Utc::now().into());
active.update(&self.db).await?;
Ok(())
}
/// Mark a task as completed with the output text.
pub async fn complete(&self, task_id: i64, output: impl Into<String>) -> Result<Model, DbErr> {
let mut active: ActiveModel = self.fetch(task_id).await?.into();
active.status = sea_orm::Set(TaskStatus::Done);
active.output = sea_orm::Set(Some(output.into()));
active.done_at = sea_orm::Set(Some(chrono::Utc::now().into()));
active.updated_at = sea_orm::Set(chrono::Utc::now().into());
active.update(&self.db).await
}
/// Mark a task as failed with an error message.
pub async fn fail(&self, task_id: i64, error: impl Into<String>) -> Result<Model, DbErr> {
let mut active: ActiveModel = self.fetch(task_id).await?.into();
active.status = sea_orm::Set(TaskStatus::Failed);
active.error = sea_orm::Set(Some(error.into()));
active.done_at = sea_orm::Set(Some(chrono::Utc::now().into()));
active.updated_at = sea_orm::Set(chrono::Utc::now().into());
active.update(&self.db).await
}
/// Get a task by ID.
pub async fn get(&self, task_id: i64) -> Result<Option<Model>, DbErr> {
Entity::find_by_id(task_id).one(&self.db).await
}
/// List all sub-tasks for a parent task.
pub async fn children(&self, parent_id: i64) -> Result<Vec<Model>, DbErr> {
Entity::find()
.filter(C::ParentId.eq(parent_id))
.order_by_asc(C::CreatedAt)
.all(&self.db)
.await
}
/// List all active (non-terminal) tasks for a project.
pub async fn active_tasks(&self, project_uuid: impl Into<uuid::Uuid>) -> Result<Vec<Model>, DbErr> {
let uuid: uuid::Uuid = project_uuid.into();
Entity::find()
.filter(C::ProjectUuid.eq(uuid))
.filter(C::Status.is_in([TaskStatus::Pending, TaskStatus::Running]))
.order_by_desc(C::CreatedAt)
.all(&self.db)
.await
}
/// List all tasks (root only) for a project.
pub async fn list(
&self,
project_uuid: impl Into<uuid::Uuid>,
limit: u64,
) -> Result<Vec<Model>, DbErr> {
let uuid: uuid::Uuid = project_uuid.into();
Entity::find()
.filter(C::ProjectUuid.eq(uuid))
.filter(C::ParentId.is_null())
.order_by_desc(C::CreatedAt)
.limit(limit)
.all(&self.db)
.await
}
/// Delete a task and all of its sub-tasks recursively.
///
/// Note: no root-task check is enforced here; callers are expected to pass
/// a root task ID.
pub async fn delete(&self, task_id: i64) -> Result<(), DbErr> {
self.delete_recursive(task_id).await
}
async fn delete_recursive(&self, task_id: i64) -> Result<(), DbErr> {
// Collect all task IDs to delete using an explicit stack (avoiding async recursion).
let mut stack = vec![task_id];
let mut idx = 0;
while idx < stack.len() {
let current = stack[idx];
let children = Entity::find()
.filter(C::ParentId.eq(current))
.all(&self.db)
.await?;
for child in children {
stack.push(child.id);
}
idx += 1;
}
// Delete children before parents so any FK constraint on parent_id holds.
for task_id in stack.into_iter().rev() {
let model = Entity::find_by_id(task_id).one(&self.db).await?;
if let Some(m) = model {
let active: ActiveModel = m.into();
active.delete(&self.db).await?;
}
}
Ok(())
}
/// Check if all sub-tasks of a given parent are done.
pub async fn are_children_done(&self, parent_id: i64) -> Result<bool, DbErr> {
let children = self.children(parent_id).await?;
if children.is_empty() {
return Ok(true);
}
Ok(children.iter().all(|c| c.is_done()))
}
}

libs/agent/tokent.rs (new file)
@@ -0,0 +1,199 @@
//! Token counting utilities using tiktoken.
//!
//! Provides accurate token counting for OpenAI-compatible models.
//! Uses the `tiktoken-rs` crate (already in workspace dependencies).
//!
//! # Strategy
//!
//! Remote usage from the API response is always preferred. When the API does
//! not return usage metadata (e.g., local models, some streaming responses),
//! tiktoken provides an estimate as a fallback.
use crate::error::{AgentError, Result};
/// Token usage data. Use `from_remote()` when the API returns usage info,
/// or `from_estimate()` when falling back to tiktoken.
#[derive(Debug, Clone, Copy, Default, serde::Serialize, serde::Deserialize)]
pub struct TokenUsage {
pub input_tokens: i64,
pub output_tokens: i64,
}
impl TokenUsage {
/// Create from remote API usage data. Returns `None` if all values are zero
/// (some providers return zeroed usage on error).
pub fn from_remote(prompt_tokens: u32, completion_tokens: u32) -> Option<Self> {
if prompt_tokens == 0 && completion_tokens == 0 {
None
} else {
Some(Self {
input_tokens: prompt_tokens as i64,
output_tokens: completion_tokens as i64,
})
}
}
/// Create from tiktoken estimate.
pub fn from_estimate(input_tokens: usize, output_tokens: usize) -> Self {
Self {
input_tokens: input_tokens as i64,
output_tokens: output_tokens as i64,
}
}
pub fn total(&self) -> i64 {
self.input_tokens + self.output_tokens
}
}
/// Resolve token usage: remote data is preferred, tiktoken is the fallback.
///
/// `remote` — `Some` when the API returned usage; `None` when unavailable.
/// `model` — model name, used to select the tokenizer for the fallback.
/// `input_text` / `output_text` — texts for the fallback estimate
/// (~4 chars/token is assumed when no tokenizer is available).
pub fn resolve_usage(
remote: Option<TokenUsage>,
model: &str,
input_text: &str,
output_text: &str,
) -> TokenUsage {
if let Some(usage) = remote {
return usage;
}
// Fallback: tiktoken estimate
let input = count_message_text(input_text, model).unwrap_or_else(|_| {
// Rough estimate: ~4 chars per token
(input_text.len() / 4).max(1)
});
let output = count_text(output_text, model).unwrap_or_else(|_| output_text.len() / 4);
TokenUsage::from_estimate(input, output)
}
/// Estimate the number of tokens in a text string using the appropriate tokenizer.
pub fn count_text(text: &str, model: &str) -> Result<usize> {
let bpe = get_tokenizer(model)?;
// Use encode_ordinary since we're counting raw text, not chat messages
let tokens = bpe.encode_ordinary(text);
Ok(tokens.len())
}
/// Count tokens in a single chat message (text content only).
pub fn count_message_text(text: &str, model: &str) -> Result<usize> {
let bpe = get_tokenizer(model)?;
// Allow special tokens so role/separator markers in the text are encoded
let tokens = bpe.encode_with_special_tokens(text);
Ok(tokens.len())
}
/// Estimate the maximum number of characters that fit within a token budget
/// given a model's context limit and a reserve for the output.
///
/// Uses a rough estimate of ~4 characters per token (typical for English text).
/// For non-Latin scripts, this is less accurate.
pub fn estimate_max_chars(
_model: &str,
context_limit: usize,
reserve_output_tokens: usize,
) -> Result<usize> {
let chars_per_token = 4;
// Subtract the output reserve plus a fixed 512-token safety margin
let safe_limit = context_limit
.saturating_sub(reserve_output_tokens)
.saturating_sub(512); // 512 token safety margin
Ok(safe_limit.saturating_mul(chars_per_token))
}
/// Truncate text to fit within a token budget for a given model.
pub fn truncate_to_token_budget(
text: &str,
model: &str,
context_limit: usize,
reserve_output_tokens: usize,
) -> Result<String> {
let max_chars = estimate_max_chars(model, context_limit, reserve_output_tokens)?;
if text.len() <= max_chars {
return Ok(text.to_string());
}
// Binary search for the exact character boundary that fits the token budget
let bpe = get_tokenizer(model)?;
let mut low = 0usize;
let mut high = text.len();
let mut result = text.to_string();
while low + 100 < high {
let mut mid = (low + high) / 2;
// Back off to a char boundary so slicing never panics on multi-byte UTF-8.
while !text.is_char_boundary(mid) {
mid -= 1;
}
let candidate = &text[..mid];
let tokens = bpe.encode_ordinary(candidate);
if tokens.len() <= safe_token_budget(context_limit, reserve_output_tokens) {
result = candidate.to_string();
low = mid;
} else {
high = mid;
}
}
Ok(result)
}
/// Returns the safe token budget (context limit minus reserve and margin).
fn safe_token_budget(context_limit: usize, reserve: usize) -> usize {
context_limit.saturating_sub(reserve).saturating_sub(512)
}
/// Get the appropriate tiktoken tokenizer for a model.
///
/// Model name mapping:
/// - "gpt-4o", "o1", "o3", "o4" → o200k_base
/// - "claude-*", "gpt-3.5-turbo", "gpt-4" → cl100k_base
/// - Unknown → cl100k_base (safe fallback)
fn get_tokenizer(model: &str) -> Result<tiktoken_rs::CoreBPE> {
// Try model-specific tokenizer first
if let Ok(bpe) = tiktoken_rs::get_bpe_from_model(model) {
return Ok(bpe);
}
// Fallback: use cl100k_base for unknown models
tiktoken_rs::cl100k_base()
.map_err(|e| AgentError::Internal(format!("Failed to init tokenizer: {}", e)))
}
/// Estimate tokens for a simple prefix/suffix pattern (e.g., "assistant\n" + text).
/// Returns the token count including the prefix.
pub fn count_with_prefix(text: &str, prefix: &str, model: &str) -> Result<usize> {
let bpe = get_tokenizer(model)?;
let prefixed = format!("{}{}", prefix, text);
let tokens = bpe.encode_with_special_tokens(&prefixed);
Ok(tokens.len())
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_count_text() {
let count = count_text("Hello, world!", "gpt-4").unwrap();
assert!(count > 0);
}
#[test]
fn test_estimate_max_chars() {
// gpt-4o context ~128k tokens
let chars = estimate_max_chars("gpt-4o", 128_000, 2048).unwrap();
assert!(chars > 0);
}
#[test]
fn test_truncate() {
// 50k chars exceeds budget: 8192 - 512 - 512 = 7168 tokens → ~28k chars
let long_text = "a".repeat(50000);
let truncated = truncate_to_token_budget(&long_text, "gpt-4o", 8192, 512).unwrap();
assert!(truncated.len() < long_text.len());
}
}

libs/agent/tool/call.rs (new file)
@@ -0,0 +1,108 @@
//! Tool call and result types.
use serde::{Deserialize, Serialize};
/// A single tool invocation requested by the AI model.
#[derive(Debug, Clone)]
pub struct ToolCall {
pub id: String,
pub name: String,
pub arguments: String,
}
impl ToolCall {
pub fn arguments_json(&self) -> serde_json::Result<serde_json::Value> {
serde_json::from_str(&self.arguments)
}
pub fn parse_args<T: serde::de::DeserializeOwned>(&self) -> serde_json::Result<T> {
serde_json::from_str(&self.arguments)
}
}
/// The result of executing a tool call.
///
/// Note: with `#[serde(untagged)]`, `serde_json::Value` matches any JSON, so
/// deserialization always yields `Ok`; treat this enum as serialize-only.
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(untagged)]
pub enum ToolResult {
/// Successful result with a JSON value.
Ok(serde_json::Value),
/// Error result with an error message.
Error(String),
}
impl ToolResult {
pub fn ok<T: Serialize>(value: T) -> Self {
Self::Ok(serde_json::to_value(value).unwrap_or(serde_json::Value::Null))
}
pub fn error(message: impl Into<String>) -> Self {
Self::Error(message.into())
}
pub fn is_error(&self) -> bool {
matches!(self, Self::Error(_))
}
}
/// Errors that can occur during tool execution.
#[derive(Debug, thiserror::Error)]
pub enum ToolError {
#[error("tool not found: {0}")]
NotFound(String),
#[error("argument parse error: {0}")]
ParseError(String),
#[error("execution error: {0}")]
ExecutionError(String),
#[error("recursion limit exceeded (max depth: {max_depth})")]
RecursionLimitExceeded { max_depth: u32 },
#[error("max tool calls exceeded: {0}")]
MaxToolCallsExceeded(usize),
#[error("internal error: {0}")]
Internal(String),
}
impl ToolError {
pub fn into_result(self) -> ToolResult {
ToolResult::Error(self.to_string())
}
}
impl From<serde_json::Error> for ToolError {
fn from(e: serde_json::Error) -> Self {
Self::ParseError(e.to_string())
}
}
/// A completed tool call with its result, ready to be sent back to the AI.
#[derive(Debug, Clone)]
pub struct ToolCallResult {
/// The original tool call.
pub call: ToolCall,
/// The execution result.
pub result: ToolResult,
}
impl ToolCallResult {
pub fn ok(call: ToolCall, value: serde_json::Value) -> Self {
Self {
call,
result: ToolResult::Ok(value),
}
}
pub fn error(call: ToolCall, message: impl Into<String>) -> Self {
Self {
call,
result: ToolResult::Error(message.into()),
}
}
pub fn from_result(call: ToolCall, result: ToolResult) -> Self {
Self { call, result }
}
}

Some files were not shown because too many files have changed in this diff.