🚀 EXECUTION PLAN & METRICS
1. EXECUTIVE SUMMARY — P0 Attack Order (Week 1)
Day 1: Critical Security
Objective: Eliminate the XSS vulnerability
Tasks:
- Install DOMPurify: npm install dompurify @types/dompurify
- Replace escapeHtml() with DOMPurify.sanitize() in MarkdownService
- Configure the whitelist: ALLOWED_TAGS, ALLOWED_ATTR
- Test with XSS payloads (OWASP Top 10)
- Commit + merge
Success criterion: the payload <img src=x onerror=alert(1)> is neutralized
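The whitelist idea behind that replacement can be sketched framework-free. This is only an illustration (the sanitizeHtml helper and the tag/attribute lists below are hypothetical); the real MarkdownService should delegate to DOMPurify.sanitize rather than hand-rolled regexes, which are known to be bypassable:

```typescript
// Hypothetical sketch of the whitelist approach. In the real MarkdownService
// this would be DOMPurify.sanitize(html, { ALLOWED_TAGS, ALLOWED_ATTR });
// the regex version below only illustrates the neutralization goal.
const ALLOWED_TAGS = new Set(['p', 'a', 'em', 'strong', 'code', 'pre', 'ul', 'ol', 'li', 'img']);
const ALLOWED_ATTR = new Set(['href', 'src', 'alt', 'title']);

function sanitizeHtml(html: string): string {
  // Drop tags outside the whitelist entirely (e.g. <script>).
  html = html.replace(/<\/?([a-zA-Z][a-zA-Z0-9-]*)\b[^>]*>/g, (tag, name: string) =>
    ALLOWED_TAGS.has(name.toLowerCase()) ? tag : ''
  );
  // Strip attributes outside the whitelist (notably on* event handlers).
  return html.replace(
    /<([a-zA-Z][a-zA-Z0-9-]*)((?:\s+[^\s=>]+(?:=(?:"[^"]*"|'[^']*'|[^\s>]+))?)*)\s*(\/?)>/g,
    (_m, name: string, attrs: string, slash: string) => {
      const kept = (attrs.match(/[^\s=>]+(?:=(?:"[^"]*"|'[^']*'|[^\s>]+))?/g) ?? [])
        .filter((a) => ALLOWED_ATTR.has(a.split('=')[0].toLowerCase()));
      return `<${name}${kept.length ? ' ' + kept.join(' ') : ''}${slash}>`;
    }
  );
}
```

The Day 1 payload then comes out as <img src=x> with the onerror handler removed, which is exactly the success criterion.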
Day 2-3: Immediate UI Performance
Objective: Eliminate perceptible freezes
Tasks:
- Implement CDK Virtual Scroll for search results (2h)
- Add track expressions (trackBy) to all @for lists (1h)
- Debounce index rebuilds (search + graph) with debounceTime(300) (3h)
- E2E tests: search across 500 notes in <150ms (2h)
Success criterion: no UI freeze >100ms on user actions
Day 4-5: Offload Computation
Objective: Free up the main thread
Tasks:
- Create markdown.worker.ts with MarkdownIt (4h)
- Implement MarkdownWorkerService with a pool of 2 workers (3h)
- Lazy load Mermaid + runOutsideAngular() (2h)
- Lazy load MathJax (1h)
- Test rendering a 1000-line note with Mermaid diagrams (2h)
Success criterion: parsing a complex note occupies the main thread for <16ms
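The pool scheduling that MarkdownWorkerService needs can be sketched independently of the Worker transport. Everything below (the names, the fake in-process workers) is illustrative; in the browser, `post` would wrap worker.postMessage plus a pending-promise map keyed by job id:

```typescript
// Illustrative round-robin dispatcher for a pool of 2 markdown workers.
interface Job { id: number; markdown: string }

class WorkerPool<W> {
  private next = 0;
  constructor(
    private workers: W[],
    private post: (worker: W, job: Job) => Promise<string>,
  ) {}

  dispatch(job: Job): Promise<string> {
    const worker = this.workers[this.next];
    this.next = (this.next + 1) % this.workers.length; // alternate w0, w1, w0, ...
    return this.post(worker, job);
  }
}

// Usage with fake in-process "workers" so the scheduling is observable:
const assignments: string[] = [];
const pool = new WorkerPool(['w0', 'w1'], (w, job) => {
  assignments.push(w);
  return Promise.resolve(`<p>${job.markdown}</p>`);
});
```

Round-robin is the simplest fair policy for a pool of 2; a least-busy policy only pays off with larger pools.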
Day 6-7: Meilisearch Backend MVP
Objective: Scalable search
Tasks:
- Docker Compose: add a Meilisearch service (1h)
- Backend: meilisearch-indexer.mjs indexing script (3h)
- Create the Angular SearchMeilisearchService (2h)
- Map Obsidian operators → Meilisearch filters (3h)
- /api/search route with operator parsing (3h)
- Tests: tag:, path:, file: operators (2h)
Success criterion: search returns in <150ms P95 on 1000 notes
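The operator-mapping step can be sketched as a small tokenizer that splits a query into a Meilisearch filter array plus a residual full-text string. The field names (tags, path, title) and the filter expressions are assumptions about the index schema, not the project's actual contract:

```typescript
// Hypothetical Obsidian-operator → Meilisearch mapping. Assumes tags, path and
// title are declared as filterable attributes on the index.
interface MeiliQuery { q: string; filter: string[] }

function mapObsidianQuery(input: string): MeiliQuery {
  const filter: string[] = [];
  const terms: string[] = [];
  for (const token of input.split(/\s+/).filter(Boolean)) {
    if (token.startsWith('tag:')) {
      filter.push(`tags = "${token.slice(4).replace(/^#/, '')}"`);
    } else if (token.startsWith('-tag:')) {
      filter.push(`tags != "${token.slice(5).replace(/^#/, '')}"`);
    } else if (token.startsWith('path:')) {
      // Simplification: exact match; real prefix filtering needs extra schema work.
      filter.push(`path = "${token.slice(5)}"`);
    } else if (token.startsWith('file:')) {
      filter.push(`title = "${token.slice(5)}"`);
    } else {
      terms.push(token); // plain term → full-text part of the query
    }
  }
  return { q: terms.join(' '), filter };
}
```

The /api/search route would then forward { q, filter } to the Meilisearch search endpoint.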
Day 8: Observability
Objective: Production diagnostics
Tasks:
- Create the POST /api/log route (2h)
- Implement log validation + sanitization (2h)
- Automatic log rotation (10MB max) (1h)
- Tests: batch of 50 events in <50ms (1h)
Success criterion: logs persisted with sessionId correlation
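The validation + sanitization step can be sketched as follows; the event shape, the field limits and the MAX_BATCH value are illustrative assumptions, not the actual API contract:

```typescript
// Hypothetical sanitizer for POST /api/log batches: caps the batch size,
// drops malformed events, and strips control characters so a client cannot
// forge extra log lines via embedded newlines (log injection).
interface LogEvent { sessionId: string; type: string; ts: number; payload?: string }

const MAX_BATCH = 50;

function sanitizeBatch(events: unknown[]): LogEvent[] {
  const clean = (s: string) => s.replace(/[\u0000-\u001f\u007f]/g, '').slice(0, 512);
  return events.slice(0, MAX_BATCH).flatMap((raw) => {
    const ev = raw as Partial<LogEvent>;
    if (typeof ev.sessionId !== 'string' || typeof ev.type !== 'string' || typeof ev.ts !== 'number') {
      return []; // drop malformed events instead of rejecting the whole batch
    }
    return [{
      sessionId: clean(ev.sessionId),
      type: clean(ev.type),
      ts: ev.ts,
      payload: typeof ev.payload === 'string' ? clean(ev.payload) : undefined,
    }];
  });
}
```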
2. STEP-BY-STEP IMPLEMENTATION PLAN
Phase 1: CRITICAL (Weeks 1-2) — 8 days
Focus: Security + blocking performance
Item | Effort | Dependencies | Risk |
---|---|---|---|
DOMPurify sanitization | 1d | None | Low |
CDK Virtual Scroll | 2d | None | Low |
Debounce index rebuild | 3d | None | Medium |
Markdown Web Worker | 4d | None | Medium |
Lazy load Mermaid/MathJax | 2d | None | Low |
Meilisearch integration | 5d | Docker setup | High |
/api/log backend | 3d | None | Low |
Deliverable: Version 1.1.0 — "Performance & Security"
Target metrics: TTI <2.5s, Search P95 <150ms, 0 XSS vulnerabilities
Phase 2: OPTIMIZATION (Weeks 3-4) — 7 days
Focus: Caching + infrastructure
Item | Effort | Dependencies | Risk |
---|---|---|---|
Service Worker + Workbox | 3d | None | Medium |
Lighthouse budgets | 0.5d | None | Low |
Multi-stage Dockerfile | 2d | None | Low |
Env variables (12-factor) | 1.5d | None | Low |
CSP headers + NGINX | 1.5d | Docker | Low |
Throttle RAF canvas | 1d | None | Low |
Extended E2E tests | 2.5d | Playwright | Medium |
Deliverable: Version 1.2.0 — "Infrastructure"
Target metrics: offline support, image <150MB, A+ on Mozilla Observatory
Phase 3: NICE-TO-HAVE (Week 5+) — 5 days
Focus: Code splitting + advanced optimizations
Item | Effort | Dependencies | Risk |
---|---|---|---|
Angular lazy routes | 3d | Routing refactor | Medium |
GraphData memoization | 1.5d | None | Low |
markdown-it-attrs whitelist | 0.5d | None | Low |
Progressive rendering | 2d | None | Medium |
IndexedDB cache | 3d | Dexie.js | Medium |
OpenTelemetry (opt.) | 4d | Monitoring infra | High |
Deliverable: Version 1.3.0 — "Polish"
Target metrics: initial bundle <800KB, cache hit rate >80%
3. METRICS TO TRACK (Performance & Errors)
A) Performance Metrics (Lighthouse + Custom)
Metric | Current (estimated) | Phase 1 Target | Phase 2 Target | Phase 3 Target | Alert Threshold |
---|---|---|---|---|---|
TTI (Time to Interactive) | 4.2s | 2.5s | 2.0s | 1.5s | >3s (P95) |
LCP (Largest Contentful Paint) | 2.8s | 2.0s | 1.5s | 1.2s | >2.5s (P75) |
FID (First Input Delay) | 120ms | 80ms | 50ms | 30ms | >100ms (P95) |
CLS (Cumulative Layout Shift) | 0.15 | 0.1 | 0.05 | 0.02 | >0.1 (P75) |
Bundle Size (initial) | 2.8MB | 1.8MB | 1.5MB | 800KB | >2MB |
Bundle Size (lazy chunks) | N/A | 500KB | 300KB | 200KB | >500KB |
Search P95 Latency | 800ms | 150ms | 100ms | 50ms | >200ms |
Graph Interaction P95 | 1500ms | 500ms | 100ms | 50ms | >300ms |
Markdown Parse P95 | 500ms | 100ms | 50ms | 16ms | >150ms |
Memory Heap (steady state) | 120MB | 100MB | 80MB | 60MB | >150MB |
Measurement tools:
- Lighthouse CI (automated in the pipeline)
- Chrome DevTools Performance profiler
- Custom performance.mark() + performance.measure()
- Real User Monitoring (RUM) via /api/log PERFORMANCE_METRIC events
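Custom marks can be instrumented as below; the mark/measure names are illustrative. The same API exists on the browser's global performance object, so the snippet uses Node's perf_hooks only to stay runnable standalone:

```typescript
import { performance } from 'perf_hooks';

// Illustrative custom timing around a search call.
performance.mark('search:start');
// ... run the search here ...
performance.mark('search:end');
performance.measure('search', 'search:start', 'search:end');

// entry.duration is the value a RUM hook would POST to /api/log
// as a PERFORMANCE_METRIC event.
const [entry] = performance.getEntriesByName('search');
```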
B) Error & Stability Metrics
Metric | Target | Alert Threshold | Action |
---|---|---|---|
Error Rate | <0.1% sessions | >1% | Rollback deploy |
XSS Vulnerabilities | 0 | >0 | Blocage release |
Search Error Rate | <0.5% queries | >2% | Investigate index corruption |
Graph Freeze Rate | <0.1% interactions | >1% | Degrade to simple view |
Worker Crash Rate | <0.01% | >0.5% | Fallback to sync mode |
API /log Uptime | >99.5% | <95% | Scale backend |
CSP Violations | <10/day | >100/day | Review inline scripts |
Alerts configured via:
- Sentry (runtime errors)
- LogRocket (session replay on error)
- Custom /api/log aggregation + Grafana
C) Business Metrics (UX)
Metric | Current | Target | Measured via |
---|---|---|---|
Searches per Session | 2.3 | 4.0 | Via SEARCH_EXECUTED events |
Graph View Engagement | 15% users | 40% | Via GRAPH_VIEW_OPEN events |
Bookmark Usage | 8% users | 25% | Via BOOKMARKS_MODIFY events |
Session Duration | 3.2min | 8min | Via APP_START → APP_STOP |
Bounce Rate (no interaction) | 35% | <20% | First event within 30s |
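These engagement events ride on the same /api/log route; a sketch of the event builder follows (the event names match the table above, while the payload shape and the makeEvent helper are assumptions):

```typescript
// Hypothetical builder for the engagement events above, correlated by
// sessionId as the Day 8 success criterion requires.
type EngagementEvent =
  | 'SEARCH_EXECUTED'
  | 'GRAPH_VIEW_OPEN'
  | 'BOOKMARKS_MODIFY'
  | 'APP_START'
  | 'APP_STOP';

function makeEvent(sessionId: string, type: EngagementEvent, payload: Record<string, unknown> = {}) {
  return { sessionId, type, ts: Date.now(), payload };
}

// A client would batch these and POST them to /api/log:
const batch = [
  makeEvent('sess-42', 'APP_START'),
  makeEvent('sess-42', 'SEARCH_EXECUTED', { queryLength: 12 }),
];
```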
4. COMMANDS TO RUN (Verification & Bench)
Performance Benchmark
Lighthouse CI (automated):
# Install
npm install -g @lhci/cli
# Run Lighthouse on dev server
ng serve &
sleep 5
lhci autorun --config=.lighthouserc.json
# Expected output:
# ✅ TTI: <2.5s
# ✅ FCP: <1.5s
# ✅ Performance Score: >85
.lighthouserc.json:
{
"ci": {
"collect": {
"url": ["http://localhost:3000"],
"numberOfRuns": 3
},
"assert": {
"assertions": {
"categories:performance": ["error", {"minScore": 0.85}],
"first-contentful-paint": ["error", {"maxNumericValue": 1500}],
"interactive": ["error", {"maxNumericValue": 2500}],
"cumulative-layout-shift": ["error", {"maxNumericValue": 0.1}]
}
},
"upload": {
"target": "temporary-public-storage"
}
}
}
Bundle Analysis
# Build with stats
ng build --configuration=production --stats-json
# Analyze with webpack-bundle-analyzer
npx webpack-bundle-analyzer dist/stats.json
# Expected:
# ✅ Initial bundle: <1.5MB
# ✅ Vendor chunk: <800KB
# ✅ Lazy chunks: <300KB each
Search Performance Test
Script: scripts/bench-search.ts
import { performance } from 'perf_hooks';
async function benchSearch() {
const queries = [
'tag:#project',
'path:folder1/ important',
'file:home -tag:#archive',
'has:attachment task:TODO'
];
const results: { query: string; duration: number; hits: number; serverTime: number }[] = [];
for (const query of queries) {
const start = performance.now();
const response = await fetch('http://localhost:4000/api/search', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ query, vaultId: 'primary' })
});
const data = await response.json();
const duration = performance.now() - start;
results.push({
query,
duration,
hits: data.estimatedTotalHits,
serverTime: data.processingTimeMs
});
}
console.table(results);
const sorted = results.sort((a, b) => a.duration - b.duration);
const p95 = sorted[Math.min(sorted.length - 1, Math.ceil(sorted.length * 0.95) - 1)].duration;
console.log(`\n✅ Search P95: ${p95.toFixed(2)}ms (target: <150ms)`);
}
benchSearch();
Run:
npx ts-node scripts/bench-search.ts
# Expected output:
# ┌─────────┬──────────────────────────────┬──────────┬──────┬────────────┐
# │ (index) │ query │ duration │ hits │ serverTime │
# ├─────────┼──────────────────────────────┼──────────┼──────┼────────────┤
# │ 0 │ 'tag:#project' │ 48.2 │ 23 │ 12.5 │
# │ 1 │ 'path:folder1/ important' │ 52.7 │ 8 │ 15.8 │
# │ 2 │ 'file:home -tag:#archive' │ 45.3 │ 1 │ 10.2 │
# │ 3 │ 'has:attachment task:TODO' │ 61.5 │ 5 │ 18.9 │
# └─────────┴──────────────────────────────┴──────────┴──────┴────────────┘
# ✅ Search P95: 61.5ms (target: <150ms)
E2E Tests Performance
Playwright config additions:
// playwright.config.ts
export default defineConfig({
use: {
trace: 'retain-on-failure',
video: 'on-first-retry',
},
reporter: [
['html'],
['json', { outputFile: 'test-results/results.json' }]
],
timeout: 30000,
expect: {
timeout: 5000
}
});
Run E2E with performance assertions:
npx playwright test --reporter=html
# Expected:
# ✅ search-performance.spec.ts (4/4 passed)
# - Search 500 notes completes in <150ms
# - No main thread freeze >100ms
# - UI remains interactive during search
# - Virtual scroll renders without CLS
Docker Image Size Verification
# Build optimized image
docker build -f docker/Dockerfile -t obsiviewer:optimized .
# Check size
docker images obsiviewer:optimized
# Expected:
# REPOSITORY TAG SIZE
# obsiviewer optimized 145MB (vs 450MB before)
# Verify healthcheck
docker run -d -p 4000:4000 --name test obsiviewer:optimized
sleep 10
docker inspect --format='{{.State.Health.Status}}' test
# Expected: healthy
Security Scan
# XSS payload tests
npm run test:e2e -- e2e/security-xss.spec.ts
# CSP violations check
curl -I http://localhost:4000 | grep -i "content-security-policy"
# Expected:
# content-security-policy: default-src 'self'; script-src 'self' 'unsafe-eval'; ...
# npm audit
npm audit --production
# Expected:
# found 0 vulnerabilities
Meilisearch Index Stats
# Check index health
curl http://localhost:7700/indexes/vault_primary/stats \
-H "Authorization: Bearer masterKey"
# Expected response:
{
"numberOfDocuments": 823,
"isIndexing": false,
"fieldDistribution": {
"title": 823,
"content": 823,
"tags": 645,
"path": 823
}
}
# Test search latency
curl -X POST http://localhost:7700/indexes/vault_primary/search \
-H "Authorization: Bearer masterKey" \
-H "Content-Type: application/json" \
-d '{"q":"project","limit":50}' \
-w "\nTime: %{time_total}s\n"
# Expected: Time: 0.035s (<50ms)
5. METRICS DASHBOARD (Grafana/Custom)
Recommended panels:
- Search Performance
  - P50/P95/P99 latency (line chart)
  - Error rate (gauge)
  - Queries per minute (counter)
- Graph Interactions
  - Freeze events count (bar chart)
  - Node click → selection latency (histogram)
  - Viewport FPS (line chart)
- Frontend Vitals
  - LCP, FID, CLS (timeseries)
  - Bundle size evolution (area chart)
  - Memory heap (line chart)
- Backend Health
  - /api/vault response time (line chart)
  - Meilisearch indexing status (state timeline)
  - Log ingestion rate (counter)
- User Engagement
  - Active sessions (gauge)
  - Feature adoption (pie chart: search/graph/bookmarks/calendar)
  - Session duration distribution (histogram)
Example Prometheus + Grafana config:
# docker-compose.yml additions
services:
  prometheus:
    image: prom/prometheus
    volumes:
      - ./monitoring/prometheus.yml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"
  grafana:
    image: grafana/grafana
    ports:
      - "3001:3000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin
    volumes:
      - ./monitoring/grafana-dashboards:/var/lib/grafana/dashboards
6. GLOBAL SUCCESS CRITERIA
Phase 1 (Weeks 1-2) ✅
- Lighthouse Performance Score: >85
- Search P95: <150ms (1000 notes)
- TTI: <2.5s
- No XSS vulnerabilities detected
- Main thread freezes: <100ms on all interactions
- /api/log operational with log rotation
Phase 2 (Weeks 3-4) ✅
- Lighthouse Performance Score: >90
- Docker image: <150MB
- Offline support: app loads from cache
- CSP headers configured, Mozilla Observatory score: A+
- E2E test coverage: >60%
- Bundle budgets respected (no warnings)
Phase 3 (Week 5+) ✅
- Initial bundle: <800KB
- Search P95: <50ms
- Graph interaction P95: <50ms
- Cache hit rate: >80%
- Memory steady state: <60MB
7. DAILY COMMANDS (CI/CD)
Pre-commit:
npm run lint
npm run test:unit
Pre-push:
npm run build
npm run test:e2e
CI Pipeline (GitHub Actions example):
name: CI
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: '20'
      - run: npm ci
      - run: npm run lint
      - run: npm run test:unit
      - run: npm run build
      - run: npx lhci autorun
      - run: npm run test:e2e
  security:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: npm ci
      - run: npm audit --production
      - run: npm run test:e2e -- e2e/security-xss.spec.ts
  docker:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: docker build -t obsiviewer:${{ github.sha }} .
      - run: |
          SIZE=$(docker image inspect obsiviewer:${{ github.sha }} --format='{{.Size}}')
          echo "Image size: ${SIZE} bytes"
          # Fail if >200MB
          [ "$SIZE" -le 209715200 ] || exit 1
END OF EXECUTION PLAN
All metrics, commands, and criteria are ready to apply immediately.