diff --git a/OKComputer_Optimiser UI_UX.zip b/OKComputer_Optimiser UI_UX.zip deleted file mode 100644 index d6dbc99..0000000 Binary files a/OKComputer_Optimiser UI_UX.zip and /dev/null differ diff --git a/README.md b/README.md index 6425f0d..07070de 100644 --- a/README.md +++ b/README.md @@ -27,6 +27,14 @@ Une application moderne et professionnelle pour la gestion automatisée d'homela - **Actions Rapides** : Upgrade, Reboot, Health-check, Backup en un clic - **Groupes Ansible** : Sélection des cibles par groupe (proxmox, lab, prod, etc.) +### Planificateur (Scheduler) +- **Planification de Playbooks** : Exécution automatique des playbooks (unique ou récurrente) +- **Interface Multi-étapes** : Création intuitive de schedules en 3 étapes +- **Vue Liste/Calendrier** : Visualisation des schedules planifiés +- **Expression Cron** : Support des expressions cron personnalisées +- **Historique d'exécution** : Traçabilité complète de chaque run +- **Intégration Dashboard** : Widget des prochaines exécutions + ## 🛠️ Technologies Utilisées ### Frontend @@ -41,6 +49,8 @@ Une application moderne et professionnelle pour la gestion automatisée d'homela - **Pydantic** : Validation de données - **WebSockets** : Communication temps réel - **Uvicorn** : Serveur ASGI performant +- **APScheduler** : Planification de tâches en arrière-plan +- **Croniter** : Parsing d'expressions cron ## 📁 Structure du Projet @@ -53,6 +63,7 @@ homelab-automation-api-v2/ │ └── requirements.txt # Dépendances Python ├── ansible/ │ ├── ansible.cfg # Configuration Ansible +│ ├── .host_status.json # Cache JSON de l'état des hôtes │ ├── inventory/ │ │ └── hosts.yml # Inventaire des hôtes │ ├── group_vars/ @@ -62,9 +73,40 @@ homelab-automation-api-v2/ │ ├── vm-reboot.yml # Redémarrage des hôtes │ ├── health-check.yml # Vérification de santé │ └── backup-config.yml # Sauvegarde de configuration +├── tasks_logs/ +│ ├── .bootstrap_status.json # État du bootstrap Ansible/SSH par hôte +│ ├── .schedule_runs.json # Historique des exécutions de schedules +│ └── 2025/ +│ └── 12/ +│ └── *.json # Logs détaillés des tâches/schedules par date +├── test_schedule.json # Exemple de définition de schedule └── README.md # Documentation ``` +### Fichiers JSON requis + +Les fichiers JSON suivants sont utilisés par l'application pour stocker l'état et l'historique des opérations : + +- **`ansible/.host_status.json`** + - Contient l'état courant des hôtes connus (reachability, dernier health-check, etc.). + - Utilisé par l'API et le dashboard pour afficher rapidement le statut des machines sans relancer un scan complet. + +- **`tasks_logs/.bootstrap_status.json`** + - Mémorise l'état du *bootstrap* Ansible/SSH par hôte (succès, échec, dernier message). + - Permet au dashboard d'indiquer si un hôte est prêt pour l'exécution de playbooks. + +- **`tasks_logs/.schedule_runs.json`** + - Journalise les exécutions des *schedules* (id du schedule, heure de run, résultat, durée, message d'erreur éventuel). + - Sert de source de vérité pour l'historique affiché dans l'interface Planificateur. + +- **`tasks_logs///*.json`** + - Fichiers horodatés contenant les logs détaillés de chaque exécution de tâche/schedule. + - Permettent de tracer finement ce qui s'est passé pour chaque run (playbook lancé, cible, sortie Ansible, statut). + +- **`test_schedule.json`** + - Exemple de définition de schedule utilisé pour les tests et la validation de l'API de planification. 
+ - Peut servir de modèle pour comprendre la structure JSON attendue lors de la création de nouveaux schedules. + ## 🚀 Installation et Lancement ### Prérequis @@ -175,6 +217,19 @@ curl -H "X-API-Key: dev-key-12345" http://localhost:8000/api/hosts - `POST /api/ansible/bootstrap` - Bootstrap un hôte pour Ansible (crée user, SSH, sudo, Python) - `GET /api/ansible/ssh-config` - Diagnostic de la configuration SSH +**Planificateur (Schedules)** +- `GET /api/schedules` - Liste tous les schedules +- `POST /api/schedules` - Crée un nouveau schedule +- `GET /api/schedules/{id}` - Récupère un schedule spécifique +- `PUT /api/schedules/{id}` - Met à jour un schedule +- `DELETE /api/schedules/{id}` - Supprime un schedule +- `POST /api/schedules/{id}/run` - Exécution forcée immédiate +- `POST /api/schedules/{id}/pause` - Met en pause un schedule +- `POST /api/schedules/{id}/resume` - Reprend un schedule +- `GET /api/schedules/{id}/runs` - Historique des exécutions +- `GET /api/schedules/stats` - Statistiques globales +- `POST /api/schedules/validate-cron` - Valide une expression cron + #### Exemples d'utilisation Ansible **Lister les playbooks disponibles :** @@ -214,6 +269,60 @@ curl -X POST -H "X-API-Key: dev-key-12345" -H "Content-Type: application/json" \ http://localhost:8000/api/ansible/adhoc ``` +#### Exemples d'utilisation du Planificateur + +**Créer un schedule quotidien :** +```bash +curl -X POST -H "X-API-Key: dev-key-12345" -H "Content-Type: application/json" \ + -d '{ + "name": "Backup quotidien", + "playbook": "backup-config.yml", + "target": "all", + "schedule_type": "recurring", + "recurrence": {"type": "daily", "time": "02:00"}, + "tags": ["Backup", "Production"] + }' \ + http://localhost:8000/api/schedules +``` + +**Créer un schedule hebdomadaire (lundi et vendredi) :** +```bash +curl -X POST -H "X-API-Key: dev-key-12345" -H "Content-Type: application/json" \ + -d '{ + "name": "Health check bi-hebdo", + "playbook": "health-check.yml", + "target": "proxmox", + "schedule_type": "recurring", + "recurrence": {"type": "weekly", "time": "08:00", "days": [1, 5]} + }' \ + http://localhost:8000/api/schedules +``` + +**Créer un schedule avec expression cron :** +```bash +curl -X POST -H "X-API-Key: dev-key-12345" -H "Content-Type: application/json" \ + -d '{ + "name": "Maintenance mensuelle", + "playbook": "vm-upgrade.yml", + "target": "lab", + "schedule_type": "recurring", + "recurrence": {"type": "custom", "cron_expression": "0 3 1 * *"} + }' \ + http://localhost:8000/api/schedules +``` + +**Lancer un schedule immédiatement :** +```bash +curl -X POST -H "X-API-Key: dev-key-12345" \ + http://localhost:8000/api/schedules/{schedule_id}/run +``` + +**Voir l'historique des exécutions :** +```bash +curl -H "X-API-Key: dev-key-12345" \ + http://localhost:8000/api/schedules/{schedule_id}/runs +``` + ### Documentation API - **Swagger UI** : `http://localhost:8000/api/docs` diff --git a/ansible/.host_status.json b/ansible/.host_status.json new file mode 100644 index 0000000..213a4da --- /dev/null +++ b/ansible/.host_status.json @@ -0,0 +1,3 @@ +{ + "hosts": {} +} \ No newline at end of file diff --git a/app/app_optimized.py b/app/app_optimized.py index b0c026f..11506f7 100644 --- a/app/app_optimized.py +++ b/app/app_optimized.py @@ -3,7 +3,7 @@ Homelab Automation Dashboard - Backend Optimisé API REST moderne avec FastAPI pour la gestion d'homelab """ -from datetime import datetime, timezone +from datetime import datetime, timezone, timedelta from pathlib import Path from time import perf_counter, time 
import os @@ -17,6 +17,16 @@ from typing import Literal, Any, List, Dict, Optional from threading import Lock import asyncio import json +import uuid + +# APScheduler imports +from apscheduler.schedulers.asyncio import AsyncIOScheduler +from apscheduler.triggers.cron import CronTrigger +from apscheduler.triggers.date import DateTrigger +from apscheduler.jobstores.memory import MemoryJobStore +from apscheduler.executors.asyncio import AsyncIOExecutor +from croniter import croniter +import pytz from fastapi import FastAPI, HTTPException, Depends, Request, Form, WebSocket, WebSocketDisconnect from fastapi.responses import HTMLResponse, JSONResponse, FileResponse @@ -322,6 +332,135 @@ class TasksFilterParams(BaseModel): search: Optional[str] = None +# ===== MODÈLES PLANIFICATEUR (SCHEDULER) ===== + +class ScheduleRecurrence(BaseModel): + """Configuration de récurrence pour un schedule""" + type: Literal["daily", "weekly", "monthly", "custom"] = "daily" + time: str = Field(default="02:00", description="Heure d'exécution HH:MM") + days: Optional[List[int]] = Field(default=None, description="Jours de la semaine (1-7, lundi=1) pour weekly") + day_of_month: Optional[int] = Field(default=None, ge=1, le=31, description="Jour du mois (1-31) pour monthly") + cron_expression: Optional[str] = Field(default=None, description="Expression cron pour custom") + + +class Schedule(BaseModel): + """Modèle d'un schedule de playbook""" + id: str = Field(default_factory=lambda: f"sched_{uuid.uuid4().hex[:12]}") + name: str = Field(..., min_length=3, max_length=100, description="Nom du schedule") + description: Optional[str] = Field(default=None, max_length=500) + playbook: str = Field(..., description="Nom du playbook à exécuter") + target_type: Literal["group", "host"] = Field(default="group", description="Type de cible") + target: str = Field(default="all", description="Nom du groupe ou hôte cible") + extra_vars: Optional[Dict[str, Any]] = Field(default=None, description="Variables supplémentaires") + schedule_type: Literal["once", "recurring"] = Field(default="recurring") + recurrence: Optional[ScheduleRecurrence] = Field(default=None) + timezone: str = Field(default="America/Montreal", description="Fuseau horaire") + start_at: Optional[datetime] = Field(default=None, description="Date de début (optionnel)") + end_at: Optional[datetime] = Field(default=None, description="Date de fin (optionnel)") + next_run_at: Optional[datetime] = Field(default=None, description="Prochaine exécution calculée") + last_run_at: Optional[datetime] = Field(default=None, description="Dernière exécution") + last_status: Literal["success", "failed", "running", "never"] = Field(default="never") + enabled: bool = Field(default=True, description="Schedule actif ou en pause") + retry_on_failure: int = Field(default=0, ge=0, le=3, description="Nombre de tentatives en cas d'échec") + timeout: int = Field(default=3600, ge=60, le=86400, description="Timeout en secondes") + tags: List[str] = Field(default=[], description="Tags pour catégorisation") + run_count: int = Field(default=0, description="Nombre total d'exécutions") + success_count: int = Field(default=0, description="Nombre de succès") + failure_count: int = Field(default=0, description="Nombre d'échecs") + created_at: datetime = Field(default_factory=lambda: datetime.now(timezone.utc)) + updated_at: datetime = Field(default_factory=lambda: datetime.now(timezone.utc)) + + class Config: + json_encoders = { + datetime: lambda v: v.isoformat() if v else None + } + + 
@field_validator('recurrence', mode='before') + @classmethod + def validate_recurrence(cls, v, info): + # Si schedule_type est 'once', recurrence n'est pas obligatoire + return v + + +class ScheduleRun(BaseModel): + """Historique d'une exécution de schedule""" + id: str = Field(default_factory=lambda: f"run_{uuid.uuid4().hex[:12]}") + schedule_id: str = Field(..., description="ID du schedule parent") + task_id: Optional[int] = Field(default=None, description="ID de la tâche créée") + started_at: datetime = Field(default_factory=lambda: datetime.now(timezone.utc)) + finished_at: Optional[datetime] = Field(default=None) + status: Literal["running", "success", "failed", "canceled"] = Field(default="running") + duration_seconds: Optional[float] = Field(default=None) + hosts_impacted: int = Field(default=0) + error_message: Optional[str] = Field(default=None) + retry_attempt: int = Field(default=0, description="Numéro de la tentative (0 = première)") + + class Config: + json_encoders = { + datetime: lambda v: v.isoformat() if v else None + } + + +class ScheduleCreateRequest(BaseModel): + """Requête de création d'un schedule""" + name: str = Field(..., min_length=3, max_length=100) + description: Optional[str] = Field(default=None, max_length=500) + playbook: str = Field(...) + target_type: Literal["group", "host"] = Field(default="group") + target: str = Field(default="all") + extra_vars: Optional[Dict[str, Any]] = Field(default=None) + schedule_type: Literal["once", "recurring"] = Field(default="recurring") + recurrence: Optional[ScheduleRecurrence] = Field(default=None) + timezone: str = Field(default="America/Montreal") + start_at: Optional[datetime] = Field(default=None) + end_at: Optional[datetime] = Field(default=None) + enabled: bool = Field(default=True) + retry_on_failure: int = Field(default=0, ge=0, le=3) + timeout: int = Field(default=3600, ge=60, le=86400) + tags: List[str] = Field(default=[]) + + @field_validator('timezone') + @classmethod + def validate_timezone(cls, v: str) -> str: + try: + pytz.timezone(v) + return v + except pytz.exceptions.UnknownTimeZoneError: + raise ValueError(f"Fuseau horaire invalide: {v}") + + +class ScheduleUpdateRequest(BaseModel): + """Requête de mise à jour d'un schedule""" + name: Optional[str] = Field(default=None, min_length=3, max_length=100) + description: Optional[str] = Field(default=None, max_length=500) + playbook: Optional[str] = Field(default=None) + target_type: Optional[Literal["group", "host"]] = Field(default=None) + target: Optional[str] = Field(default=None) + extra_vars: Optional[Dict[str, Any]] = Field(default=None) + schedule_type: Optional[Literal["once", "recurring"]] = Field(default=None) + recurrence: Optional[ScheduleRecurrence] = Field(default=None) + timezone: Optional[str] = Field(default=None) + start_at: Optional[datetime] = Field(default=None) + end_at: Optional[datetime] = Field(default=None) + enabled: Optional[bool] = Field(default=None) + retry_on_failure: Optional[int] = Field(default=None, ge=0, le=3) + timeout: Optional[int] = Field(default=None, ge=60, le=86400) + tags: Optional[List[str]] = Field(default=None) + + +class ScheduleStats(BaseModel): + """Statistiques globales des schedules""" + total: int = 0 + active: int = 0 + paused: int = 0 + expired: int = 0 + next_execution: Optional[datetime] = None + next_schedule_name: Optional[str] = None + failures_24h: int = 0 + executions_24h: int = 0 + success_rate_7d: float = 0.0 + + # ===== SERVICE DE LOGGING MARKDOWN ===== class TaskLogService: @@ -993,11 
+1132,736 @@ class HostStatusService: return False +# ===== SERVICE PLANIFICATEUR (SCHEDULER) ===== + +SCHEDULES_FILE = DIR_LOGS_TASKS / ".schedules.json" +SCHEDULE_RUNS_FILE = DIR_LOGS_TASKS / ".schedule_runs.json" + + +class SchedulerService: + """Service pour gérer les schedules de playbooks avec APScheduler""" + + def __init__(self, schedules_file: Path, runs_file: Path): + self.schedules_file = schedules_file + self.runs_file = runs_file + self._ensure_files() + + # Configurer APScheduler + jobstores = {'default': MemoryJobStore()} + executors = {'default': AsyncIOExecutor()} + job_defaults = {'coalesce': True, 'max_instances': 1, 'misfire_grace_time': 300} + + self.scheduler = AsyncIOScheduler( + jobstores=jobstores, + executors=executors, + job_defaults=job_defaults, + timezone=pytz.UTC + ) + self._started = False + + def _ensure_files(self): + """Crée les fichiers de données s'ils n'existent pas""" + self.schedules_file.parent.mkdir(parents=True, exist_ok=True) + if not self.schedules_file.exists(): + self._save_schedules([]) + if not self.runs_file.exists(): + self._save_runs([]) + + def _load_schedules(self) -> List[Dict]: + """Charge les schedules depuis le fichier""" + try: + with open(self.schedules_file, 'r', encoding='utf-8') as f: + data = json.load(f) + return data.get("schedules", []) if isinstance(data, dict) else data + except: + return [] + + def _save_schedules(self, schedules: List[Dict]): + """Sauvegarde les schedules dans le fichier""" + with open(self.schedules_file, 'w', encoding='utf-8') as f: + json.dump({"schedules": schedules}, f, indent=2, default=str, ensure_ascii=False) + + def _load_runs(self) -> List[Dict]: + """Charge l'historique des exécutions""" + try: + with open(self.runs_file, 'r', encoding='utf-8') as f: + data = json.load(f) + return data.get("runs", []) if isinstance(data, dict) else data + except: + return [] + + def _save_runs(self, runs: List[Dict]): + """Sauvegarde l'historique des exécutions""" + # Garder seulement les 1000 dernières exécutions + runs = runs[:1000] + with open(self.runs_file, 'w', encoding='utf-8') as f: + json.dump({"runs": runs}, f, indent=2, default=str, ensure_ascii=False) + + def start(self): + """Démarre le scheduler et charge tous les schedules actifs""" + if not self._started: + self.scheduler.start() + self._started = True + # Charger les schedules actifs + self._load_active_schedules() + print("📅 Scheduler démarré avec succès") + + def shutdown(self): + """Arrête le scheduler proprement""" + if self._started: + self.scheduler.shutdown(wait=False) + self._started = False + + def _load_active_schedules(self): + """Charge tous les schedules actifs dans APScheduler""" + schedules = self._load_schedules() + for sched_data in schedules: + if sched_data.get('enabled', True): + try: + schedule = Schedule(**sched_data) + self._add_job_for_schedule(schedule) + except Exception as e: + print(f"Erreur chargement schedule {sched_data.get('id')}: {e}") + + def _build_cron_trigger(self, schedule: Schedule) -> Optional[CronTrigger]: + """Construit un trigger cron à partir de la configuration du schedule""" + if schedule.schedule_type == "once": + return None + + recurrence = schedule.recurrence + if not recurrence: + return None + + tz = pytz.timezone(schedule.timezone) + hour, minute = recurrence.time.split(':') if recurrence.time else ("2", "0") + + try: + if recurrence.type == "daily": + return CronTrigger(hour=int(hour), minute=int(minute), timezone=tz) + + elif recurrence.type == "weekly": + # Convertir jours (1-7 lundi=1) 
en format cron (0-6 lundi=0) + days = recurrence.days or [1] + day_of_week = ','.join(str(d - 1) for d in days) + return CronTrigger(day_of_week=day_of_week, hour=int(hour), minute=int(minute), timezone=tz) + + elif recurrence.type == "monthly": + day = recurrence.day_of_month or 1 + return CronTrigger(day=day, hour=int(hour), minute=int(minute), timezone=tz) + + elif recurrence.type == "custom" and recurrence.cron_expression: + # Parser l'expression cron + parts = recurrence.cron_expression.split() + if len(parts) == 5: + return CronTrigger.from_crontab(recurrence.cron_expression, timezone=tz) + else: + # Expression cron étendue (6 champs avec secondes) + return CronTrigger( + second=parts[0] if len(parts) > 5 else '0', + minute=parts[0] if len(parts) == 5 else parts[1], + hour=parts[1] if len(parts) == 5 else parts[2], + day=parts[2] if len(parts) == 5 else parts[3], + month=parts[3] if len(parts) == 5 else parts[4], + day_of_week=parts[4] if len(parts) == 5 else parts[5], + timezone=tz + ) + except Exception as e: + print(f"Erreur construction trigger cron: {e}") + return None + + return None + + def _add_job_for_schedule(self, schedule: Schedule): + """Ajoute un job APScheduler pour un schedule""" + job_id = f"schedule_{schedule.id}" + + # Supprimer le job existant s'il existe + try: + self.scheduler.remove_job(job_id) + except: + pass + + if schedule.schedule_type == "once": + # Exécution unique + if schedule.start_at and schedule.start_at > datetime.now(timezone.utc): + trigger = DateTrigger(run_date=schedule.start_at, timezone=pytz.UTC) + self.scheduler.add_job( + self._execute_schedule, + trigger, + id=job_id, + args=[schedule.id], + replace_existing=True + ) + else: + # Exécution récurrente + trigger = self._build_cron_trigger(schedule) + if trigger: + self.scheduler.add_job( + self._execute_schedule, + trigger, + id=job_id, + args=[schedule.id], + replace_existing=True + ) + + # Calculer et mettre à jour next_run_at + self._update_next_run(schedule.id) + + def _update_next_run(self, schedule_id: str): + """Met à jour le champ next_run_at d'un schedule""" + job_id = f"schedule_{schedule_id}" + try: + job = self.scheduler.get_job(job_id) + if job and job.next_run_time: + schedules = self._load_schedules() + for s in schedules: + if s['id'] == schedule_id: + s['next_run_at'] = job.next_run_time.isoformat() + break + self._save_schedules(schedules) + except: + pass + + async def _execute_schedule(self, schedule_id: str): + """Exécute un schedule (appelé par APScheduler)""" + # Import circulaire évité en utilisant les variables globales + global ws_manager, ansible_service, db, task_log_service + + schedules = self._load_schedules() + sched_data = next((s for s in schedules if s['id'] == schedule_id), None) + + if not sched_data: + print(f"Schedule {schedule_id} non trouvé") + return + + schedule = Schedule(**sched_data) + + # Vérifier si le schedule est encore actif + if not schedule.enabled: + return + + # Vérifier la fenêtre temporelle + now = datetime.now(timezone.utc) + if schedule.end_at and now > schedule.end_at: + # Schedule expiré, le désactiver + schedule.enabled = False + self._update_schedule_in_storage(schedule) + return + + # Créer un ScheduleRun + run = ScheduleRun(schedule_id=schedule_id) + runs = self._load_runs() + runs.insert(0, run.dict()) + self._save_runs(runs) + + # Mettre à jour le schedule + schedule.last_run_at = now + schedule.last_status = "running" + schedule.run_count += 1 + self._update_schedule_in_storage(schedule) + + # Notifier via WebSocket + try: + 
await ws_manager.broadcast({ + "type": "schedule_run_started", + "data": { + "schedule_id": schedule_id, + "schedule_name": schedule.name, + "run": run.dict(), + "status": "running" + } + }) + except: + pass + + # Créer une tâche + task_id = db.get_next_id("tasks") + playbook_name = schedule.playbook.replace('.yml', '').replace('-', ' ').title() + task = Task( + id=task_id, + name=f"[Planifié] {playbook_name}", + host=schedule.target, + status="running", + progress=0, + start_time=now + ) + db.tasks.insert(0, task) + + # Mettre à jour le run avec le task_id + run.task_id = task_id + runs = self._load_runs() + for r in runs: + if r['id'] == run.id: + r['task_id'] = task_id + break + self._save_runs(runs) + + # Notifier la création de tâche + try: + await ws_manager.broadcast({ + "type": "task_created", + "data": task.dict() + }) + except: + pass + + # Exécuter le playbook + start_time = perf_counter() + try: + result = await ansible_service.execute_playbook( + playbook=schedule.playbook, + target=schedule.target, + extra_vars=schedule.extra_vars, + check_mode=False, + verbose=True + ) + + execution_time = perf_counter() - start_time + success = result.get("success", False) + + # Mettre à jour la tâche + task.status = "completed" if success else "failed" + task.progress = 100 + task.end_time = datetime.now(timezone.utc) + task.duration = f"{execution_time:.1f}s" + task.output = result.get("stdout", "") + task.error = result.get("stderr", "") if not success else None + + # Mettre à jour le run + run.status = "success" if success else "failed" + run.finished_at = datetime.now(timezone.utc) + run.duration_seconds = execution_time + run.error_message = result.get("stderr", "") if not success else None + + # Compter les hôtes impactés + stdout = result.get("stdout", "") + host_count = len(re.findall(r'^[a-zA-Z0-9][a-zA-Z0-9._-]+\s*:\s*ok=', stdout, re.MULTILINE)) + run.hosts_impacted = host_count + + # Mettre à jour le schedule + schedule.last_status = "success" if success else "failed" + if success: + schedule.success_count += 1 + else: + schedule.failure_count += 1 + + # Sauvegarder + self._update_schedule_in_storage(schedule) + runs = self._load_runs() + for r in runs: + if r['id'] == run.id: + r.update(run.dict()) + break + self._save_runs(runs) + + # Sauvegarder le log markdown + try: + task_log_service.save_task_log( + task=task, + output=result.get("stdout", ""), + error=result.get("stderr", "") + ) + except: + pass + + # Notifier + await ws_manager.broadcast({ + "type": "schedule_run_finished", + "data": { + "schedule_id": schedule_id, + "schedule_name": schedule.name, + "run": run.dict(), + "status": run.status, + "success": success + } + }) + + await ws_manager.broadcast({ + "type": "task_completed", + "data": { + "id": task_id, + "status": task.status, + "progress": 100, + "duration": task.duration, + "success": success + } + }) + + # Log + log_entry = LogEntry( + id=db.get_next_id("logs"), + timestamp=datetime.now(timezone.utc), + level="INFO" if success else "ERROR", + message=f"Schedule '{schedule.name}' exécuté: {'succès' if success else 'échec'}", + source="scheduler", + host=schedule.target + ) + db.logs.insert(0, log_entry) + + except Exception as e: + # Échec de l'exécution + execution_time = perf_counter() - start_time + + task.status = "failed" + task.end_time = datetime.now(timezone.utc) + task.error = str(e) + + run.status = "failed" + run.finished_at = datetime.now(timezone.utc) + run.duration_seconds = execution_time + run.error_message = str(e) + + schedule.last_status = 
"failed" + schedule.failure_count += 1 + + self._update_schedule_in_storage(schedule) + runs = self._load_runs() + for r in runs: + if r['id'] == run.id: + r.update(run.dict()) + break + self._save_runs(runs) + + try: + task_log_service.save_task_log(task=task, error=str(e)) + except: + pass + + try: + await ws_manager.broadcast({ + "type": "schedule_run_finished", + "data": { + "schedule_id": schedule_id, + "run": run.dict(), + "status": "failed", + "error": str(e) + } + }) + + await ws_manager.broadcast({ + "type": "task_failed", + "data": {"id": task_id, "status": "failed", "error": str(e)} + }) + except: + pass + + log_entry = LogEntry( + id=db.get_next_id("logs"), + timestamp=datetime.now(timezone.utc), + level="ERROR", + message=f"Erreur schedule '{schedule.name}': {str(e)}", + source="scheduler", + host=schedule.target + ) + db.logs.insert(0, log_entry) + + # Mettre à jour next_run_at + self._update_next_run(schedule_id) + + def _update_schedule_in_storage(self, schedule: Schedule): + """Met à jour un schedule dans le stockage""" + schedule.updated_at = datetime.now(timezone.utc) + schedules = self._load_schedules() + for i, s in enumerate(schedules): + if s['id'] == schedule.id: + schedules[i] = schedule.dict() + break + self._save_schedules(schedules) + + # ===== MÉTHODES PUBLIQUES CRUD ===== + + def get_all_schedules(self, + enabled: Optional[bool] = None, + playbook: Optional[str] = None, + tag: Optional[str] = None) -> List[Schedule]: + """Récupère tous les schedules avec filtrage optionnel""" + schedules_data = self._load_schedules() + schedules = [] + + for s in schedules_data: + try: + schedule = Schedule(**s) + + # Filtres + if enabled is not None and schedule.enabled != enabled: + continue + if playbook and playbook.lower() not in schedule.playbook.lower(): + continue + if tag and tag not in schedule.tags: + continue + + schedules.append(schedule) + except: + continue + + # Trier par prochaine exécution + schedules.sort(key=lambda x: x.next_run_at or datetime.max.replace(tzinfo=timezone.utc)) + return schedules + + def get_schedule(self, schedule_id: str) -> Optional[Schedule]: + """Récupère un schedule par ID""" + schedules = self._load_schedules() + for s in schedules: + if s['id'] == schedule_id: + return Schedule(**s) + return None + + def create_schedule(self, request: ScheduleCreateRequest) -> Schedule: + """Crée un nouveau schedule""" + schedule = Schedule( + name=request.name, + description=request.description, + playbook=request.playbook, + target_type=request.target_type, + target=request.target, + extra_vars=request.extra_vars, + schedule_type=request.schedule_type, + recurrence=request.recurrence, + timezone=request.timezone, + start_at=request.start_at, + end_at=request.end_at, + enabled=request.enabled, + retry_on_failure=request.retry_on_failure, + timeout=request.timeout, + tags=request.tags + ) + + # Sauvegarder + schedules = self._load_schedules() + schedules.append(schedule.dict()) + self._save_schedules(schedules) + + # Ajouter le job si actif + if schedule.enabled and self._started: + self._add_job_for_schedule(schedule) + + return schedule + + def update_schedule(self, schedule_id: str, request: ScheduleUpdateRequest) -> Optional[Schedule]: + """Met à jour un schedule existant""" + schedule = self.get_schedule(schedule_id) + if not schedule: + return None + + # Appliquer les modifications + update_data = request.dict(exclude_unset=True, exclude_none=True) + for key, value in update_data.items(): + if hasattr(schedule, key): + setattr(schedule, key, 
value) + + schedule.updated_at = datetime.now(timezone.utc) + + # Sauvegarder + self._update_schedule_in_storage(schedule) + + # Mettre à jour le job + if self._started: + job_id = f"schedule_{schedule_id}" + try: + self.scheduler.remove_job(job_id) + except: + pass + + if schedule.enabled: + self._add_job_for_schedule(schedule) + + return schedule + + def delete_schedule(self, schedule_id: str) -> bool: + """Supprime un schedule""" + schedules = self._load_schedules() + original_len = len(schedules) + schedules = [s for s in schedules if s['id'] != schedule_id] + + if len(schedules) < original_len: + self._save_schedules(schedules) + + # Supprimer le job + job_id = f"schedule_{schedule_id}" + try: + self.scheduler.remove_job(job_id) + except: + pass + + return True + return False + + def pause_schedule(self, schedule_id: str) -> Optional[Schedule]: + """Met en pause un schedule""" + schedule = self.get_schedule(schedule_id) + if not schedule: + return None + + schedule.enabled = False + self._update_schedule_in_storage(schedule) + + # Supprimer le job + job_id = f"schedule_{schedule_id}" + try: + self.scheduler.remove_job(job_id) + except: + pass + + return schedule + + def resume_schedule(self, schedule_id: str) -> Optional[Schedule]: + """Reprend un schedule en pause""" + schedule = self.get_schedule(schedule_id) + if not schedule: + return None + + schedule.enabled = True + self._update_schedule_in_storage(schedule) + + # Ajouter le job + if self._started: + self._add_job_for_schedule(schedule) + + return schedule + + async def run_now(self, schedule_id: str) -> Optional[ScheduleRun]: + """Exécute immédiatement un schedule""" + schedule = self.get_schedule(schedule_id) + if not schedule: + return None + + # Exécuter de manière asynchrone + await self._execute_schedule(schedule_id) + + # Retourner le dernier run + runs = self._load_runs() + for r in runs: + if r['schedule_id'] == schedule_id: + return ScheduleRun(**r) + return None + + def get_schedule_runs(self, schedule_id: str, limit: int = 50) -> List[ScheduleRun]: + """Récupère l'historique des exécutions d'un schedule""" + runs = self._load_runs() + schedule_runs = [] + + for r in runs: + if r['schedule_id'] == schedule_id: + try: + schedule_runs.append(ScheduleRun(**r)) + except: + continue + + return schedule_runs[:limit] + + def get_stats(self) -> ScheduleStats: + """Calcule les statistiques globales des schedules""" + schedules = self.get_all_schedules() + runs = self._load_runs() + + now = datetime.now(timezone.utc) + yesterday = now - timedelta(days=1) + week_ago = now - timedelta(days=7) + + stats = ScheduleStats() + stats.total = len(schedules) + stats.active = len([s for s in schedules if s.enabled]) + stats.paused = len([s for s in schedules if not s.enabled]) + + # Schedules expirés + stats.expired = len([s for s in schedules if s.end_at and s.end_at < now]) + + # Prochaine exécution + active_schedules = [s for s in schedules if s.enabled and s.next_run_at] + if active_schedules: + next_schedule = min(active_schedules, key=lambda x: x.next_run_at) + stats.next_execution = next_schedule.next_run_at + stats.next_schedule_name = next_schedule.name + + # Stats 24h + runs_24h = [] + for r in runs: + try: + started = datetime.fromisoformat(r['started_at'].replace('Z', '+00:00')) if isinstance(r['started_at'], str) else r['started_at'] + if started >= yesterday: + runs_24h.append(r) + except: + continue + + stats.executions_24h = len(runs_24h) + stats.failures_24h = len([r for r in runs_24h if r.get('status') == 'failed']) + + 
# Taux de succès 7j + runs_7d = [] + for r in runs: + try: + started = datetime.fromisoformat(r['started_at'].replace('Z', '+00:00')) if isinstance(r['started_at'], str) else r['started_at'] + if started >= week_ago: + runs_7d.append(r) + except: + continue + + if runs_7d: + success_count = len([r for r in runs_7d if r.get('status') == 'success']) + stats.success_rate_7d = round((success_count / len(runs_7d)) * 100, 1) + + return stats + + def get_upcoming_executions(self, limit: int = 5) -> List[Dict]: + """Retourne les prochaines exécutions planifiées""" + schedules = self.get_all_schedules(enabled=True) + upcoming = [] + + for s in schedules: + if s.next_run_at: + upcoming.append({ + "schedule_id": s.id, + "schedule_name": s.name, + "playbook": s.playbook, + "target": s.target, + "next_run_at": s.next_run_at.isoformat() if s.next_run_at else None, + "tags": s.tags + }) + + upcoming.sort(key=lambda x: x['next_run_at'] or '') + return upcoming[:limit] + + def validate_cron_expression(self, expression: str) -> Dict: + """Valide une expression cron et retourne les prochaines exécutions""" + try: + cron = croniter(expression, datetime.now()) + next_runs = [cron.get_next(datetime).isoformat() for _ in range(5)] + return { + "valid": True, + "next_runs": next_runs, + "expression": expression + } + except Exception as e: + return { + "valid": False, + "error": str(e), + "expression": expression + } + + def cleanup_old_runs(self, days: int = 90): + """Nettoie les exécutions plus anciennes que X jours""" + cutoff = datetime.now(timezone.utc) - timedelta(days=days) + runs = self._load_runs() + + new_runs = [] + for r in runs: + try: + started = datetime.fromisoformat(r['started_at'].replace('Z', '+00:00')) if isinstance(r['started_at'], str) else r['started_at'] + if started >= cutoff: + new_runs.append(r) + except: + new_runs.append(r) # Garder si on ne peut pas parser la date + + self._save_runs(new_runs) + return len(runs) - len(new_runs) + + # Instances globales des services task_log_service = TaskLogService(DIR_LOGS_TASKS) adhoc_history_service = AdHocHistoryService(ADHOC_HISTORY_FILE) bootstrap_status_service = BootstrapStatusService(BOOTSTRAP_STATUS_FILE) host_status_service = HostStatusService(HOST_STATUS_FILE) +scheduler_service = SchedulerService(SCHEDULES_FILE, SCHEDULE_RUNS_FILE) class WebSocketManager: @@ -3832,6 +4696,390 @@ async def execute_ansible_task( }) +# ===== ENDPOINTS PLANIFICATEUR (SCHEDULER) ===== + +@app.get("/api/schedules") +async def get_schedules( + enabled: Optional[bool] = None, + playbook: Optional[str] = None, + tag: Optional[str] = None, + api_key_valid: bool = Depends(verify_api_key) +): + """Liste tous les schedules avec filtrage optionnel + + Args: + enabled: Filtrer par statut (true = actifs, false = en pause) + playbook: Filtrer par nom de playbook (recherche partielle) + tag: Filtrer par tag + """ + schedules = scheduler_service.get_all_schedules(enabled=enabled, playbook=playbook, tag=tag) + return { + "schedules": [s.dict() for s in schedules], + "count": len(schedules) + } + + +@app.post("/api/schedules") +async def create_schedule( + request: ScheduleCreateRequest, + api_key_valid: bool = Depends(verify_api_key) +): + """Crée un nouveau schedule + + Exemple de body: + { + "name": "Backup quotidien", + "playbook": "backup-config.yml", + "target": "all", + "schedule_type": "recurring", + "recurrence": { + "type": "daily", + "time": "02:00" + }, + "tags": ["Backup", "Production"] + } + """ + # Vérifier que le playbook existe + playbooks = 
ansible_service.get_playbooks() + playbook_names = [p['filename'] for p in playbooks] + [p['name'] for p in playbooks] + + # Normaliser le nom du playbook + playbook_file = request.playbook + if not playbook_file.endswith(('.yml', '.yaml')): + playbook_file = f"{playbook_file}.yml" + + if playbook_file not in playbook_names and request.playbook not in playbook_names: + raise HTTPException(status_code=400, detail=f"Playbook '{request.playbook}' non trouvé") + + # Vérifier la cible + if request.target_type == "group": + groups = ansible_service.get_groups() + if request.target not in groups and request.target != "all": + raise HTTPException(status_code=400, detail=f"Groupe '{request.target}' non trouvé") + else: + if not ansible_service.host_exists(request.target): + raise HTTPException(status_code=400, detail=f"Hôte '{request.target}' non trouvé") + + # Valider la récurrence si nécessaire + if request.schedule_type == "recurring" and not request.recurrence: + raise HTTPException(status_code=400, detail="La récurrence est requise pour un schedule récurrent") + + # Valider l'expression cron si custom + if request.recurrence and request.recurrence.type == "custom": + if not request.recurrence.cron_expression: + raise HTTPException(status_code=400, detail="Expression cron requise pour le type 'custom'") + validation = scheduler_service.validate_cron_expression(request.recurrence.cron_expression) + if not validation["valid"]: + raise HTTPException(status_code=400, detail=f"Expression cron invalide: {validation.get('error')}") + + schedule = scheduler_service.create_schedule(request) + + # Log + log_entry = LogEntry( + id=db.get_next_id("logs"), + timestamp=datetime.now(timezone.utc), + level="INFO", + message=f"Schedule '{schedule.name}' créé pour {schedule.playbook} sur {schedule.target}", + source="scheduler" + ) + db.logs.insert(0, log_entry) + + # Notifier via WebSocket + await ws_manager.broadcast({ + "type": "schedule_created", + "data": schedule.dict() + }) + + return { + "success": True, + "message": f"Schedule '{schedule.name}' créé avec succès", + "schedule": schedule.dict() + } + + +@app.get("/api/schedules/stats") +async def get_schedules_stats(api_key_valid: bool = Depends(verify_api_key)): + """Récupère les statistiques globales des schedules""" + stats = scheduler_service.get_stats() + upcoming = scheduler_service.get_upcoming_executions(limit=5) + + return { + "stats": stats.dict(), + "upcoming": upcoming + } + + +@app.get("/api/schedules/upcoming") +async def get_upcoming_schedules( + limit: int = 10, + api_key_valid: bool = Depends(verify_api_key) +): + """Récupère les prochaines exécutions planifiées""" + upcoming = scheduler_service.get_upcoming_executions(limit=limit) + return { + "upcoming": upcoming, + "count": len(upcoming) + } + + +@app.get("/api/schedules/validate-cron") +async def validate_cron_expression( + expression: str, + api_key_valid: bool = Depends(verify_api_key) +): + """Valide une expression cron et retourne les 5 prochaines exécutions""" + result = scheduler_service.validate_cron_expression(expression) + return result + + +@app.get("/api/schedules/{schedule_id}") +async def get_schedule( + schedule_id: str, + api_key_valid: bool = Depends(verify_api_key) +): + """Récupère les détails d'un schedule spécifique""" + schedule = scheduler_service.get_schedule(schedule_id) + if not schedule: + raise HTTPException(status_code=404, detail=f"Schedule '{schedule_id}' non trouvé") + + return schedule.dict() + + +@app.put("/api/schedules/{schedule_id}") +async def 
update_schedule( + schedule_id: str, + request: ScheduleUpdateRequest, + api_key_valid: bool = Depends(verify_api_key) +): + """Met à jour un schedule existant""" + schedule = scheduler_service.get_schedule(schedule_id) + if not schedule: + raise HTTPException(status_code=404, detail=f"Schedule '{schedule_id}' non trouvé") + + # Valider le playbook si modifié + if request.playbook: + playbooks = ansible_service.get_playbooks() + playbook_names = [p['filename'] for p in playbooks] + [p['name'] for p in playbooks] + playbook_file = request.playbook + if not playbook_file.endswith(('.yml', '.yaml')): + playbook_file = f"{playbook_file}.yml" + if playbook_file not in playbook_names and request.playbook not in playbook_names: + raise HTTPException(status_code=400, detail=f"Playbook '{request.playbook}' non trouvé") + + # Valider l'expression cron si modifiée + if request.recurrence and request.recurrence.type == "custom": + if request.recurrence.cron_expression: + validation = scheduler_service.validate_cron_expression(request.recurrence.cron_expression) + if not validation["valid"]: + raise HTTPException(status_code=400, detail=f"Expression cron invalide: {validation.get('error')}") + + updated = scheduler_service.update_schedule(schedule_id, request) + + # Log + log_entry = LogEntry( + id=db.get_next_id("logs"), + timestamp=datetime.now(timezone.utc), + level="INFO", + message=f"Schedule '{updated.name}' mis à jour", + source="scheduler" + ) + db.logs.insert(0, log_entry) + + # Notifier via WebSocket + await ws_manager.broadcast({ + "type": "schedule_updated", + "data": updated.dict() + }) + + return { + "success": True, + "message": f"Schedule '{updated.name}' mis à jour", + "schedule": updated.dict() + } + + +@app.delete("/api/schedules/{schedule_id}") +async def delete_schedule( + schedule_id: str, + api_key_valid: bool = Depends(verify_api_key) +): + """Supprime un schedule""" + schedule = scheduler_service.get_schedule(schedule_id) + if not schedule: + raise HTTPException(status_code=404, detail=f"Schedule '{schedule_id}' non trouvé") + + schedule_name = schedule.name + success = scheduler_service.delete_schedule(schedule_id) + + if not success: + raise HTTPException(status_code=500, detail="Erreur lors de la suppression") + + # Log + log_entry = LogEntry( + id=db.get_next_id("logs"), + timestamp=datetime.now(timezone.utc), + level="WARN", + message=f"Schedule '{schedule_name}' supprimé", + source="scheduler" + ) + db.logs.insert(0, log_entry) + + # Notifier via WebSocket + await ws_manager.broadcast({ + "type": "schedule_deleted", + "data": {"id": schedule_id, "name": schedule_name} + }) + + return { + "success": True, + "message": f"Schedule '{schedule_name}' supprimé" + } + + +@app.post("/api/schedules/{schedule_id}/run") +async def run_schedule_now( + schedule_id: str, + api_key_valid: bool = Depends(verify_api_key) +): + """Exécute immédiatement un schedule (exécution forcée)""" + schedule = scheduler_service.get_schedule(schedule_id) + if not schedule: + raise HTTPException(status_code=404, detail=f"Schedule '{schedule_id}' non trouvé") + + # Lancer l'exécution en arrière-plan + run = await scheduler_service.run_now(schedule_id) + + return { + "success": True, + "message": f"Schedule '{schedule.name}' lancé", + "run": run.dict() if run else None + } + + +@app.post("/api/schedules/{schedule_id}/pause") +async def pause_schedule( + schedule_id: str, + api_key_valid: bool = Depends(verify_api_key) +): + """Met en pause un schedule""" + schedule = 
scheduler_service.pause_schedule(schedule_id) + if not schedule: + raise HTTPException(status_code=404, detail=f"Schedule '{schedule_id}' non trouvé") + + # Log + log_entry = LogEntry( + id=db.get_next_id("logs"), + timestamp=datetime.now(timezone.utc), + level="INFO", + message=f"Schedule '{schedule.name}' mis en pause", + source="scheduler" + ) + db.logs.insert(0, log_entry) + + # Notifier via WebSocket + await ws_manager.broadcast({ + "type": "schedule_updated", + "data": schedule.dict() + }) + + return { + "success": True, + "message": f"Schedule '{schedule.name}' mis en pause", + "schedule": schedule.dict() + } + + +@app.post("/api/schedules/{schedule_id}/resume") +async def resume_schedule( + schedule_id: str, + api_key_valid: bool = Depends(verify_api_key) +): + """Reprend un schedule en pause""" + schedule = scheduler_service.resume_schedule(schedule_id) + if not schedule: + raise HTTPException(status_code=404, detail=f"Schedule '{schedule_id}' non trouvé") + + # Log + log_entry = LogEntry( + id=db.get_next_id("logs"), + timestamp=datetime.now(timezone.utc), + level="INFO", + message=f"Schedule '{schedule.name}' repris", + source="scheduler" + ) + db.logs.insert(0, log_entry) + + # Notifier via WebSocket + await ws_manager.broadcast({ + "type": "schedule_updated", + "data": schedule.dict() + }) + + return { + "success": True, + "message": f"Schedule '{schedule.name}' repris", + "schedule": schedule.dict() + } + + +@app.get("/api/schedules/{schedule_id}/runs") +async def get_schedule_runs( + schedule_id: str, + limit: int = 50, + api_key_valid: bool = Depends(verify_api_key) +): + """Récupère l'historique des exécutions d'un schedule""" + schedule = scheduler_service.get_schedule(schedule_id) + if not schedule: + raise HTTPException(status_code=404, detail=f"Schedule '{schedule_id}' non trouvé") + + runs = scheduler_service.get_schedule_runs(schedule_id, limit=limit) + + return { + "schedule_id": schedule_id, + "schedule_name": schedule.name, + "runs": [r.dict() for r in runs], + "count": len(runs) + } + + +# ===== ÉVÉNEMENTS STARTUP/SHUTDOWN ===== + +@app.on_event("startup") +async def startup_event(): + """Événement de démarrage de l'application""" + print("🚀 Homelab Automation Dashboard démarré") + + # Démarrer le scheduler + scheduler_service.start() + + # Log de démarrage + log_entry = LogEntry( + id=db.get_next_id("logs"), + timestamp=datetime.now(timezone.utc), + level="INFO", + message="Application démarrée - Scheduler initialisé", + source="system" + ) + db.logs.insert(0, log_entry) + + # Nettoyer les anciennes exécutions (>90 jours) + cleaned = scheduler_service.cleanup_old_runs(days=90) + if cleaned > 0: + print(f"🧹 {cleaned} anciennes exécutions nettoyées") + + +@app.on_event("shutdown") +async def shutdown_event(): + """Événement d'arrêt de l'application""" + print("👋 Arrêt de l'application...") + + # Arrêter le scheduler + scheduler_service.shutdown() + + print("✅ Scheduler arrêté proprement") + + # Démarrer l'application if __name__ == "__main__": uvicorn.run( diff --git a/app/index.html b/app/index.html index 377cbd2..5ff34a4 100644 --- a/app/index.html +++ b/app/index.html @@ -1273,6 +1273,257 @@ border-color: #d1d5db; color: #111827; } + + /* ===== SCHEDULER PAGE STYLES ===== */ + .schedule-card { + background: rgba(42, 42, 42, 0.4); + border: 1px solid rgba(255, 255, 255, 0.1); + border-radius: 12px; + padding: 16px 20px; + transition: all 0.2s ease; + } + + .schedule-card:hover { + background: rgba(42, 42, 42, 0.7); + border-color: rgba(124, 58, 237, 0.3); + 
transform: translateX(4px); + } + + .schedule-card.paused { + opacity: 0.7; + border-left: 3px solid #f59e0b; + } + + .schedule-card.active { + border-left: 3px solid #10b981; + } + + .schedule-status-chip { + font-size: 0.7rem; + padding: 2px 8px; + border-radius: 9999px; + font-weight: 500; + } + + .schedule-status-chip.active { + background-color: rgba(16, 185, 129, 0.2); + color: #10b981; + } + + .schedule-status-chip.paused { + background-color: rgba(245, 158, 11, 0.2); + color: #f59e0b; + } + + .schedule-status-chip.running { + background-color: rgba(59, 130, 246, 0.2); + color: #3b82f6; + } + + .schedule-status-chip.success { + background-color: rgba(16, 185, 129, 0.2); + color: #10b981; + } + + .schedule-status-chip.failed { + background-color: rgba(239, 68, 68, 0.2); + color: #ef4444; + } + + .schedule-status-chip.scheduled { + background-color: rgba(124, 58, 237, 0.2); + color: #a78bfa; + } + + .schedule-tag { + font-size: 0.65rem; + padding: 2px 6px; + border-radius: 4px; + background-color: rgba(107, 114, 128, 0.3); + color: #9ca3af; + } + + .schedule-action-btn { + padding: 6px 10px; + border-radius: 6px; + font-size: 0.75rem; + transition: all 0.15s ease; + } + + .schedule-action-btn:hover { + transform: scale(1.05); + } + + .schedule-action-btn.run { + background: rgba(16, 185, 129, 0.2); + color: #10b981; + } + + .schedule-action-btn.run:hover { + background: rgba(16, 185, 129, 0.4); + } + + .schedule-action-btn.pause { + background: rgba(245, 158, 11, 0.2); + color: #f59e0b; + } + + .schedule-action-btn.pause:hover { + background: rgba(245, 158, 11, 0.4); + } + + .schedule-action-btn.edit { + background: rgba(59, 130, 246, 0.2); + color: #60a5fa; + } + + .schedule-action-btn.edit:hover { + background: rgba(59, 130, 246, 0.4); + } + + .schedule-action-btn.delete { + background: rgba(239, 68, 68, 0.2); + color: #f87171; + } + + .schedule-action-btn.delete:hover { + background: rgba(239, 68, 68, 0.4); + } + + .schedule-filter-btn.active { + background-color: #7c3aed !important; + color: white !important; + } + + /* Calendar styles */ + .schedule-calendar-day { + min-height: 80px; + background: rgba(42, 42, 42, 0.3); + border: 1px solid rgba(255, 255, 255, 0.05); + border-radius: 8px; + padding: 8px; + transition: all 0.2s ease; + } + + .schedule-calendar-day:hover { + background: rgba(42, 42, 42, 0.6); + border-color: rgba(124, 58, 237, 0.3); + } + + .schedule-calendar-day.today { + border-color: #7c3aed; + background: rgba(124, 58, 237, 0.1); + } + + .schedule-calendar-day.other-month { + opacity: 0.4; + } + + .schedule-calendar-event { + font-size: 0.65rem; + padding: 2px 4px; + border-radius: 4px; + margin-top: 2px; + cursor: pointer; + white-space: nowrap; + overflow: hidden; + text-overflow: ellipsis; + } + + .schedule-calendar-event.success { + background-color: rgba(16, 185, 129, 0.3); + color: #10b981; + } + + .schedule-calendar-event.failed { + background-color: rgba(239, 68, 68, 0.3); + color: #ef4444; + } + + .schedule-calendar-event.scheduled { + background-color: rgba(59, 130, 246, 0.3); + color: #60a5fa; + } + + /* Modal multi-step */ + .schedule-modal-step { + display: none; + } + + .schedule-modal-step.active { + display: block; + animation: fadeIn 0.3s ease; + } + + .schedule-step-indicator { + display: flex; + justify-content: center; + gap: 8px; + margin-bottom: 24px; + } + + .schedule-step-dot { + width: 32px; + height: 32px; + border-radius: 50%; + display: flex; + align-items: center; + justify-content: center; + font-size: 0.875rem; + font-weight: 600; 
+ background: rgba(107, 114, 128, 0.3); + color: #9ca3af; + transition: all 0.3s ease; + } + + .schedule-step-dot.active { + background: #7c3aed; + color: white; + } + + .schedule-step-dot.completed { + background: #10b981; + color: white; + } + + .schedule-step-connector { + width: 40px; + height: 2px; + background: rgba(107, 114, 128, 0.3); + align-self: center; + } + + .schedule-step-connector.active { + background: #7c3aed; + } + + /* Recurrence preview */ + .recurrence-preview { + background: rgba(124, 58, 237, 0.1); + border: 1px solid rgba(124, 58, 237, 0.3); + border-radius: 8px; + padding: 12px 16px; + font-size: 0.875rem; + } + + /* Light theme overrides */ + body.light-theme .schedule-card { + background: rgba(255, 255, 255, 0.6); + border-color: #d1d5db; + } + + body.light-theme .schedule-card:hover { + background: rgba(255, 255, 255, 0.9); + } + + body.light-theme .schedule-calendar-day { + background: rgba(255, 255, 255, 0.6); + border-color: #e5e7eb; + } + + body.light-theme .schedule-calendar-day:hover { + background: rgba(255, 255, 255, 0.9); + } @@ -1291,6 +1542,7 @@ Hosts Playbooks Tasks + Schedules Logs Aide + + +
[New dashboard widget (original markup not recoverable): a "Planificateur" card with a "Voir tout →" link, three counters labelled "Actifs" (0), "Prochaine" (--) and "Échecs 24h" (0), and an upcoming-executions list that shows "Chargement..." until data arrives.]
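The widget above is populated from the `GET /api/schedules/stats` endpoint added further down in `app_optimized.py`, which returns the serialized `ScheduleStats` plus the upcoming executions. A minimal sketch of that call from a Python client, reusing the localhost base URL and dev API key shown in the README examples (any real deployment would differ):

```python
# Minimal sketch of the request behind the dashboard "Planificateur" widget.
# Assumes the backend is running locally with the dev API key from the README.
import requests

BASE_URL = "http://localhost:8000"          # from the README examples
HEADERS = {"X-API-Key": "dev-key-12345"}    # dev key from the README

resp = requests.get(f"{BASE_URL}/api/schedules/stats", headers=HEADERS, timeout=10)
resp.raise_for_status()
payload = resp.json()

stats = payload["stats"]        # serialized ScheduleStats
upcoming = payload["upcoming"]  # output of get_upcoming_executions()

# The three widget counters map to these fields:
print("Actifs     :", stats["active"])
print("Échecs 24h :", stats["failures_24h"])
print("Prochaine  :", stats.get("next_execution") or "--")
```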
@@ -1702,6 +1989,141 @@
[New "Schedules" page (original markup not recoverable): page title "Planificateur des Playbooks" with the tagline "Planifiez et orchestrez vos playbooks dans le temps - Exécutions automatiques ponctuelles ou récurrentes", four stat cards ("Schedules actifs", "En pause", "Prochaine exécution", "Échecs 24h"), a "Filtres:" toolbar, list/calendar view containers, and a "Prochaines exécutions" panel.]
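The page summarized above is a thin client over the schedule endpoints documented in the README. As a hedged illustration (localhost URL, dev API key and payload values taken from the README examples), this is roughly what the multi-step creation flow submits, and how the per-schedule actions map to the API; note that `ScheduleRecurrence.days` uses 1-7 with Monday = 1, which `_build_cron_trigger` converts to APScheduler's 0-6 (Monday = 0):

```python
# Sketch of the schedule-creation request and the per-card actions.
# Values are illustrative; the payload shape matches ScheduleCreateRequest.
import requests

BASE_URL = "http://localhost:8000"
HEADERS = {"X-API-Key": "dev-key-12345"}

payload = {
    "name": "Health check bi-hebdo",
    "playbook": "health-check.yml",
    "target": "proxmox",
    "schedule_type": "recurring",
    # days: 1-7 with Monday = 1 (converted to cron 0-6 by the backend)
    "recurrence": {"type": "weekly", "time": "08:00", "days": [1, 5]},
    "tags": ["Production"],
}

created = requests.post(f"{BASE_URL}/api/schedules", json=payload, headers=HEADERS).json()
schedule_id = created["schedule"]["id"]

# Per-card actions exposed by the page:
requests.post(f"{BASE_URL}/api/schedules/{schedule_id}/run", headers=HEADERS)     # run now
requests.post(f"{BASE_URL}/api/schedules/{schedule_id}/pause", headers=HEADERS)   # pause
requests.post(f"{BASE_URL}/api/schedules/{schedule_id}/resume", headers=HEADERS)  # resume
```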
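For the "custom" recurrence type, the backend validates cron expressions with croniter before accepting them; the same check is exposed over HTTP (defined in `app_optimized.py` as a GET taking an `expression` query parameter). A short sketch, using the monthly-maintenance expression from the README:

```python
# Hedged sketch of cron validation, mirroring SchedulerService.validate_cron_expression.
from datetime import datetime

import requests
from croniter import croniter

BASE_URL = "http://localhost:8000"
HEADERS = {"X-API-Key": "dev-key-12345"}

expression = "0 3 1 * *"  # 03:00 on the 1st of every month (README example)

# Via the API:
result = requests.get(
    f"{BASE_URL}/api/schedules/validate-cron",
    params={"expression": expression},
    headers=HEADERS,
).json()
print(result["valid"], result.get("next_runs") or result.get("error"))

# Locally, the service does essentially the same thing with croniter:
cron = croniter(expression, datetime.now())
print([cron.get_next(datetime).isoformat() for _ in range(5)])
```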
@@ -2385,6 +2807,16 @@ homelab-automation/
+