gitdataai/libs/git/hook/mod.rs
ZhenYi 8fb2436f22 feat(git): add Redis-backed hook worker with per-repo distributed locking
- pool/worker.rs: single-threaded consumer that BLMPOPs from Redis queues
  sequentially. K8s replicas provide HA — each pod runs one worker.
- pool/redis.rs: RedisConsumer with BLMOVE atomic dequeue, ACK/NAK, and
  JSON-payload retry support.
- pool/types.rs: HookTask, TaskType, PoolConfig (minimal — no pool metrics).
- sync/lock.rs: per-repo lock via Redis SET NX EX, preventing concurrent
  workers from processing the same repo. A lock conflict (GitError::Locked)
  requeues the task without incrementing its retry count.
- hook/mod.rs: HookService.start_worker() spawns the background worker.
- ssh/mod.rs / http/mod.rs: ReceiveSyncService RPUSHes to Redis queue.
  Both run_http and run_ssh call start_worker() to launch the consumer.
2026-04-17 12:33:58 +08:00
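The lock-conflict handling described above (requeue without bumping the retry count) can be sketched as pure decision logic. This is an illustrative mirror, not the crate's actual code: the names `WorkerError`, `Disposition`, `MAX_RETRIES`, and the exact `HookTask` fields are all assumptions.

```rust
// Illustrative sketch of the worker's failure handling. All names here are
// hypothetical; only the behavior (lock conflicts don't consume the retry
// budget) is taken from the commit message.
const MAX_RETRIES: u32 = 3; // assumed limit, not from the crate

#[derive(Debug, Clone, PartialEq)]
struct HookTask {
    repo_id: u64,
    retry_count: u32,
}

#[derive(Debug)]
enum WorkerError {
    /// Another worker holds the per-repo lock (maps to GitError::Locked).
    Locked,
    /// Any other processing failure.
    Other(String),
}

#[derive(Debug, PartialEq)]
enum Disposition {
    /// RPUSH the task back unchanged so another worker can pick it up.
    Requeue(HookTask),
    /// RPUSH the task back with retry_count incremented.
    RetryLater(HookTask),
    /// Drop the task after exhausting the retry budget.
    Dead(HookTask),
}

fn on_failure(task: HookTask, err: WorkerError) -> Disposition {
    match err {
        // Lock conflicts do not count against the retry budget.
        WorkerError::Locked => Disposition::Requeue(task),
        WorkerError::Other(_) if task.retry_count + 1 >= MAX_RETRIES => {
            Disposition::Dead(task)
        }
        WorkerError::Other(_) => Disposition::RetryLater(HookTask {
            retry_count: task.retry_count + 1,
            ..task
        }),
    }
}

fn main() {
    let t = HookTask { repo_id: 7, retry_count: 0 };
    // A lock conflict requeues the task unchanged.
    println!("{:?}", on_failure(t.clone(), WorkerError::Locked));
    // Any other error bumps retry_count before requeueing.
    println!("{:?}", on_failure(t, WorkerError::Other("io".into())));
}
```

Keeping the lock conflict out of the retry budget matters because contention is expected under multiple pods: a task that merely lost the lock race is not a failing task.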


use config::AppConfig;
use db::cache::AppCache;
use db::database::AppDatabase;
use deadpool_redis::cluster::Pool as RedisPool;
use slog::Logger;
use std::sync::Arc;
use tokio_util::sync::CancellationToken;

pub mod pool;
pub mod sync;
pub mod webhook_dispatch;

pub use pool::{HookWorker, PoolConfig, RedisConsumer};
pub use pool::types::{HookTask, TaskType};

/// Hook service that manages the Redis-backed task queue worker.
///
/// Multiple gitserver pods can run concurrently — the worker acquires a
/// per-repo Redis lock before processing each task.
#[derive(Clone)]
pub struct HookService {
    pub(crate) db: AppDatabase,
    pub(crate) cache: AppCache,
    pub(crate) redis_pool: RedisPool,
    pub(crate) logger: Logger,
    pub(crate) config: AppConfig,
    pub(crate) http: Arc<reqwest::Client>,
}

impl HookService {
    pub fn new(
        db: AppDatabase,
        cache: AppCache,
        redis_pool: RedisPool,
        logger: Logger,
        config: AppConfig,
        http: Arc<reqwest::Client>,
    ) -> Self {
        Self {
            db,
            cache,
            redis_pool,
            logger,
            config,
            http,
        }
    }

    /// Start the background worker and return a cancellation token.
    pub fn start_worker(&self) -> CancellationToken {
        let pool_config = PoolConfig::from_env(&self.config);
        pool::start_worker(
            self.db.clone(),
            self.cache.clone(),
            self.redis_pool.clone(),
            self.logger.clone(),
            pool_config,
        )
    }
}
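As a sketch of the Redis key layout this architecture implies (producers RPUSH onto a pending list, the consumer BLMOVEs into a per-consumer processing list, ACK is an LREM, and the per-repo lock is a `SET ... NX EX`), the helpers below show one plausible naming scheme. Every prefix here is an assumption for illustration, not necessarily what pool/redis.rs and sync/lock.rs actually use:

```rust
// Hypothetical key layout for the Redis-backed hook queue. The "hook:*"
// prefixes are assumed, not taken from the crate.

/// Pending-task list that the ssh/http ReceiveSyncService RPUSHes into.
fn queue_key(task_type: &str) -> String {
    format!("hook:queue:{task_type}")
}

/// Per-consumer processing list: BLMOVE atomically moves a task here from
/// the pending list, so a crashed worker leaves its in-flight task
/// recoverable rather than lost.
fn processing_key(consumer_id: &str) -> String {
    format!("hook:processing:{consumer_id}")
}

/// Per-repo lock key, taken with `SET <key> <token> NX EX <ttl>` before a
/// task for that repo is processed; ACK then LREMs the task from the
/// processing list.
fn lock_key(repo_id: u64) -> String {
    format!("hook:lock:{repo_id}")
}

fn main() {
    println!("{}", queue_key("webhook"));
    println!("{}", processing_key("pod-0"));
    println!("{}", lock_key(42));
}
```

The BLMOVE pending-to-processing pattern is the standard reliable-queue idiom in Redis: the dequeue and the in-flight record are a single atomic step, which is what lets multiple replicas consume the same queue safely.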