# Phase 5: Event Tracking, Business Dashboards, Per-User Stats -- Implementation Plan

> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.

**Goal:** Add event tracking (signup/login/bouncer/billing events), Stripe-based MRR dashboards, fix the active users metric, and show per-user usage stats in the portal dashboard.

**Architecture:** Append-only `events` table in accounts-db for event recording. Stripe API queries on a timer for revenue metrics. Internal `/api/usage/{sub}` endpoints on txt/pics for cross-service stats. Dashboard page expanded with bouncer + content stats. All metrics exposed as Prometheus gauges for Grafana.

**Tech Stack:** Rust (Axum 0.8, sqlx, tokio-postgres, async-stripe 0.41, kube 3, serde_json), PostgreSQL, Prometheus, Grafana

---

### Task 1: Events table migration

**Files:**
- Create: `sql/004_create_events.sql`

**Step 1: Write the migration**

```sql
CREATE TABLE events (
    id BIGSERIAL PRIMARY KEY,
    user_sub TEXT NOT NULL REFERENCES users(keycloak_sub),
    event_type TEXT NOT NULL,
    metadata JSONB,
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

CREATE INDEX idx_events_type_created ON events (event_type, created_at);
CREATE INDEX idx_events_user ON events (user_sub, created_at);
```

**Step 2: Commit**

```bash
git add sql/004_create_events.sql
git commit -m "feat: add events table migration (004)"
```

---

### Task 2: Event recording helper in web-api

**Files:**
- Create: `crates/web-api/src/events.rs`
- Modify: `crates/web-api/src/main.rs:1` (add `mod events;`)

**Step 1: Write the events module**

```rust
use sqlx::PgPool;

/// Best-effort event recording: failures are logged, never propagated.
pub async fn record(pool: &PgPool, user_sub: &str, event_type: &str, metadata: Option<serde_json::Value>) {
    if let Err(e) = sqlx::query(
        "INSERT INTO events (user_sub, event_type, metadata) VALUES ($1, $2, $3)",
    )
    .bind(user_sub)
    .bind(event_type)
    .bind(metadata)
    .execute(pool)
    .await
    {
        tracing::warn!("failed to record event {event_type}: {e}");
    }
}
```

**Step 2: Add `mod events;` to main.rs**

In
`crates/web-api/src/main.rs`, add `mod events;` after the existing module declarations (after line 9, `mod stripe_util;`).

**Step 3: Verify it compiles**

Run: `cargo check -p irc-now-web-api`
Expected: compiles (the events module is defined but not yet called)

**Step 4: Commit**

```bash
git add crates/web-api/src/events.rs crates/web-api/src/main.rs
git commit -m "feat: add event recording helper"
```

---

### Task 3: Record signup and login events in auth callback

**Files:**
- Modify: `crates/web-api/src/routes/auth.rs:95-202` (callback function)

**Step 1: Detect signup vs login**

The upsert query at line 149 returns the user row. A signup is when the user was just created. Change the upsert to also return `created_at` and compare it to determine whether this is a new user.

Replace the query at lines 149-162 with:

```rust
let row = sqlx::query_as::<_, UserRow>(
    "INSERT INTO users (keycloak_sub, email) VALUES ($1, $2)
     ON CONFLICT (keycloak_sub) DO UPDATE SET email = EXCLUDED.email
     RETURNING plan, stripe_customer_id, display_name, content_expires, created_at",
)
.bind(sub)
.bind(email)
.fetch_one(&state.db)
.await
.map_err(|e| {
    tracing::error!("user upsert failed: {e}");
    (AppendHeaders([flash::set_flash("login_failed")]), Redirect::temporary("/auth/error"))
})?;
```

Add `created_at: chrono::DateTime<chrono::Utc>` to the `UserRow` struct (lines 42-48). Add a `use chrono` import -- check whether chrono is already a dependency; if not, add `chrono = { version = "0.4", features = ["serde"] }` to `crates/web-api/Cargo.toml`.
**Step 2: Record events after upsert**

After the upsert (after line 162, before building `claims`), add:

```rust
let is_signup = (chrono::Utc::now() - row.created_at).num_seconds() < 5;
if is_signup {
    crate::events::record(&state.db, sub, "signup", None).await;
}
crate::events::record(&state.db, sub, "login", None).await;
```

**Step 3: Update `last_login_at` in the upsert**

Change the upsert SQL to also set `last_login_at`:

```sql
INSERT INTO users (keycloak_sub, email) VALUES ($1, $2)
ON CONFLICT (keycloak_sub) DO UPDATE SET email = EXCLUDED.email, last_login_at = NOW()
RETURNING plan, stripe_customer_id, display_name, content_expires, created_at
```

This requires the `last_login_at` column to exist. That migration was applied previously (migration `001_add_last_login_at.sql` in the worktree). If it hasn't been merged to main yet, create `sql/005_add_last_login_at.sql`:

```sql
ALTER TABLE users ADD COLUMN IF NOT EXISTS last_login_at TIMESTAMPTZ;
CREATE INDEX IF NOT EXISTS idx_users_last_login_at ON users (last_login_at) WHERE last_login_at IS NOT NULL;
```

**Step 4: Verify it compiles**

Run: `cargo check -p irc-now-web-api`

**Step 5: Commit**

```bash
git add crates/web-api/src/routes/auth.rs sql/
git commit -m "feat: record signup and login events in auth callback"
```

---

### Task 4: Record bouncer create/delete events

**Files:**
- Modify: `crates/web-api/src/routes/bouncer.rs:78-123` (create function)
- Modify: `crates/web-api/src/routes/bouncer.rs:241-269` (delete function)

**Step 1: Record bouncer_create event after successful creation**

In the `create` function, after the `api.create()` call succeeds (line 116), before `add_default_upstream`, the `user.sub` has already been moved into the bouncer labels (line 101). We need to capture it before that.
Clone `user.sub` before line 94:

```rust
let user_sub = user.sub.clone();
```

Then after line 120 (the `add_default_upstream` call), add:

```rust
crate::events::record(
    &state.db,
    &user_sub,
    "bouncer_create",
    Some(serde_json::json!({"bouncer": name})),
).await;
```

**Step 2: Record bouncer_delete event after successful deletion**

In the `delete` function, after the successful `api.delete()` at line 263, add event recording. The `user.sub` is available from the `AuthUser` extractor. Change the `Ok(_)` arm of the delete match (line 263) to:

```rust
Ok(_) => {
    crate::events::record(
        &state.db,
        &user.sub,
        "bouncer_delete",
        Some(serde_json::json!({"bouncer": name})),
    ).await;
    (AppendHeaders([flash::set_flash("bouncer_deleted")]), Redirect::to("/bouncers"))
}
```

**Step 3: Verify it compiles**

Run: `cargo check -p irc-now-web-api`

**Step 4: Commit**

```bash
git add crates/web-api/src/routes/bouncer.rs
git commit -m "feat: record bouncer create/delete events"
```

---

### Task 5: Record plan upgrade/downgrade events in billing webhook

**Files:**
- Modify: `crates/web-api/src/routes/billing.rs:107-181` (webhook function)

**Step 1: Record plan_upgrade event**

In the `checkout.session.completed` handler, after the successful DB update (line 148, `Ok(_)` arm), add:

```rust
crate::events::record(
    &state.db,
    sub,
    "plan_upgrade",
    Some(serde_json::json!({"from": "free", "to": "pro"})),
).await;
```

**Step 2: Record plan_downgrade event**

In the `customer.subscription.deleted` handler, after the successful downgrade (line 170, inside `if let Some(sub)`), add:

```rust
crate::events::record(
    &state.db,
    &sub,
    "plan_downgrade",
    Some(serde_json::json!({"from": "pro", "to": "free"})),
).await;
```

**Step 3: Verify it compiles**

Run: `cargo check -p irc-now-web-api`

**Step 4: Commit**

```bash
git add crates/web-api/src/routes/billing.rs
git commit -m "feat: record plan upgrade/downgrade events"
```

---

### Task 6: Stripe MRR metrics background task

**Files:**
- Create:
`crates/web-api/src/business_metrics.rs`
- Modify: `crates/web-api/src/main.rs` (add `mod business_metrics;`, spawn tasks)
- Modify: `crates/web-api/Cargo.toml` (add `metrics`, `metrics-exporter-prometheus`)

**Step 1: Add prometheus metrics dependencies**

Add to `crates/web-api/Cargo.toml`:

```toml
metrics = "0.24"
metrics-exporter-prometheus = { version = "0.16", default-features = false }
```

Check the workspace Cargo.toml for existing versions of these -- the bot-service worktree already uses them. Match the existing versions.

**Step 2: Write the business_metrics module**

```rust
use metrics::{counter, gauge};
use sqlx::PgPool;

pub async fn record_stripe_metrics(stripe_client: stripe::Client) {
    let mut interval = tokio::time::interval(std::time::Duration::from_secs(300));
    loop {
        interval.tick().await;
        let active_params = stripe::ListSubscriptions {
            status: Some(stripe::SubscriptionStatusFilter::Active),
            limit: Some(100),
            ..Default::default()
        };
        match stripe::Subscription::list(&stripe_client, &active_params).await {
            Ok(list) => {
                let mrr_cents: i64 = list.data.iter()
                    .filter_map(|s| s.items.data.first())
                    .filter_map(|item| item.price.as_ref())
                    .filter_map(|price| price.unit_amount)
                    .sum();
                gauge!("irc_now_mrr_cents").set(mrr_cents as f64);
                gauge!("irc_now_subscriptions_active").set(list.data.len() as f64);
            }
            Err(e) => tracing::warn!("stripe subscription list failed: {e}"),
        }
    }
}

pub async fn record_event_metrics(db: PgPool) {
    let mut interval = tokio::time::interval(std::time::Duration::from_secs(60));
    loop {
        interval.tick().await;

        // Signups in last 7 days
        if let Ok(count) = sqlx::query_scalar::<_, i64>(
            "SELECT COUNT(*) FROM events WHERE event_type = 'signup' AND created_at > NOW() - INTERVAL '7 days'",
        ).fetch_one(&db).await {
            gauge!("irc_now_signups_7d").set(count as f64);
        }

        // Conversion rate: upgrades / signups over 30 days
        let upgrades: i64 = sqlx::query_scalar(
            "SELECT COUNT(*) FROM events WHERE event_type = 'plan_upgrade' AND created_at > NOW() - INTERVAL '30
days'",
        ).fetch_one(&db).await.unwrap_or(0);
        let signups: i64 = sqlx::query_scalar(
            "SELECT COUNT(*) FROM events WHERE event_type = 'signup' AND created_at > NOW() - INTERVAL '30 days'",
        ).fetch_one(&db).await.unwrap_or(0);
        if signups > 0 {
            gauge!("irc_now_conversion_rate_30d").set(upgrades as f64 / signups as f64);
        }

        // Churn rate: downgrades / pro users over 30 days
        let downgrades: i64 = sqlx::query_scalar(
            "SELECT COUNT(*) FROM events WHERE event_type = 'plan_downgrade' AND created_at > NOW() - INTERVAL '30 days'",
        ).fetch_one(&db).await.unwrap_or(0);
        let pro_users: i64 = sqlx::query_scalar(
            "SELECT COUNT(*) FROM users WHERE plan = 'pro'",
        ).fetch_one(&db).await.unwrap_or(0);
        if pro_users > 0 {
            gauge!("irc_now_churn_rate_30d").set(downgrades as f64 / pro_users as f64);
        }

        // Users total + by plan
        if let Ok(total) = sqlx::query_scalar::<_, i64>(
            "SELECT COUNT(*) FROM users",
        ).fetch_one(&db).await {
            gauge!("irc_now_users_total").set(total as f64);
        }
        let plan_rows: Vec<(String, i64)> = sqlx::query_as(
            "SELECT plan, COUNT(*) FROM users GROUP BY plan",
        ).fetch_all(&db).await.unwrap_or_default();
        for (plan, count) in plan_rows {
            gauge!("irc_now_users_by_plan", "plan" => plan).set(count as f64);
        }

        // Active users (bouncer interaction) -- done in record_soju_metrics
    }
}

pub async fn record_soju_metrics(kube: kube::Client, namespace: String) {
    use k8s_openapi::api::core::v1::Secret;
    use kube::{Api, api::ListParams};
    use crate::k8s::SojuBouncer;

    let mut interval = tokio::time::interval(std::time::Duration::from_secs(300));
    loop {
        interval.tick().await;
        let bouncer_api: Api<SojuBouncer> = Api::namespaced(kube.clone(), &namespace);
        let secret_api: Api<Secret> = Api::namespaced(kube.clone(), &namespace);
        let bouncers = match bouncer_api.list(&ListParams::default()).await {
            Ok(list) => list.items,
            Err(e) => {
                tracing::warn!("failed to list bouncers for metrics: {e}");
                continue;
            }
        };
        let mut total_networks: i64 = 0;
        let mut total_channels: i64 = 0;
        let mut total_messages: i64 = 0;
        let mut
            active_users: i64 = 0;
        for bouncer in &bouncers {
            let name = match &bouncer.metadata.name {
                Some(n) => n,
                None => continue,
            };
            let uri = match secret_api.get(&format!("{name}-db")).await {
                Ok(secret) => secret.data.as_ref()
                    .and_then(|d| d.get("uri"))
                    .map(|b| String::from_utf8_lossy(&b.0).to_string()),
                Err(_) => continue,
            };
            let Some(uri) = uri else { continue };
            let Ok((client, connection)) =
                tokio_postgres::connect(&uri, tokio_postgres::NoTls).await
            else {
                continue;
            };
            tokio::spawn(async move { let _ = connection.await; });
            if let Ok(row) = client.query_one(r#"SELECT COUNT(*) FROM "Network""#, &[]).await {
                total_networks += row.get::<_, i64>(0);
            }
            if let Ok(row) = client.query_one(r#"SELECT COUNT(*) FROM "Channel""#, &[]).await {
                total_channels += row.get::<_, i64>(0);
            }
            if let Ok(row) = client.query_one(r#"SELECT COUNT(*) FROM "MessageTarget""#, &[]).await {
                total_messages += row.get::<_, i64>(0);
            }
            if let Ok(row) = client.query_one(
                r#"SELECT COUNT(*) FROM "User" WHERE downstream_interacted_at > NOW() - INTERVAL '24 hours'"#,
                &[],
            ).await {
                active_users += row.get::<_, i64>(0);
            }
        }
        gauge!("irc_now_bouncer_networks_total").set(total_networks as f64);
        gauge!("irc_now_bouncer_channels_total").set(total_channels as f64);
        gauge!("irc_now_bouncer_messages_total").set(total_messages as f64);
        gauge!("irc_now_bouncer_active_users").set(active_users as f64);
        gauge!("irc_now_active_users_24h").set(active_users as f64);
    }
}

pub fn login_counter() {
    counter!("irc_now_logins_total").increment(1);
}
```

Note: the `"MessageTarget"` table name may differ -- check the soju schema. The worktree used `"Message"`, but soju's actual table for message history needs verifying: connect to a bouncer DB and list tables. Fall back to `"Message"` if uncertain.

**Step 3: Set up the Prometheus metrics handler and spawn tasks in main.rs**

Add `mod business_metrics;` to the main.rs module declarations.
Add the Prometheus recorder setup before the router, a `/metrics` endpoint, and background task spawning after the session store setup.

After line 9 (`mod stripe_util;`), add:

```rust
mod business_metrics;
```

The recorder handle must be shared between `main()` and the metrics handler, so use a single `static`. At the top of main.rs (after the imports), add:

```rust
use std::sync::OnceLock;
use metrics_exporter_prometheus::PrometheusHandle;

static PROM_HANDLE: OnceLock<PrometheusHandle> = OnceLock::new();

async fn metrics_handler() -> String {
    PROM_HANDLE.get().map(|h| h.render()).unwrap_or_default()
}
```

In `main()`, before building the router (after line 91, the session_layer), install the recorder:

```rust
let handle = metrics_exporter_prometheus::PrometheusBuilder::new()
    .install_recorder()
    .expect("prometheus recorder install failed");
PROM_HANDLE.set(handle).expect("prometheus handle already set");
```

Add the route to the router:

```rust
.route("/metrics", get(metrics_handler))
```

Spawn the background tasks (after `_deletion_task`):

```rust
tokio::spawn(business_metrics::record_event_metrics(state.db.clone()));
tokio::spawn(business_metrics::record_soju_metrics(state.kube.clone(), state.namespace.clone()));
tokio::spawn(business_metrics::record_stripe_metrics(state.stripe_client.clone()));
```

**Step 4: Add login counter call in auth callback**

In `crates/web-api/src/routes/auth.rs`, after recording the login event, add:

```rust
crate::business_metrics::login_counter();
```

**Step 5: Verify it compiles**

Run: `cargo check -p irc-now-web-api`

Fix any issues with Stripe API types -- `stripe::ListSubscriptions` might need different field names. Check the async-stripe 0.41 docs. The key pieces are:
- the `status` filter for active subscriptions
- iteration over `items.data` for pricing

**Step 6: Commit**

```bash
git add crates/web-api/src/business_metrics.rs crates/web-api/src/main.rs crates/web-api/Cargo.toml
git commit -m "feat: add business metrics -- Stripe MRR, event-derived gauges, soju stats"
```

---

### Task 7: Internal usage API on txt service

**Files:**
- Create: `crates/txt/src/routes/usage.rs`
- Modify: `crates/txt/src/routes/mod.rs` (add `pub mod usage;`)
- Modify: `crates/txt/src/main.rs` (add route)

**Step 1: Write the usage endpoint**

```rust
use axum::extract::{Path, State};
use axum::Json;
use serde::Serialize;
use crate::state::AppState;

#[derive(Serialize)]
pub struct UsageResponse {
    pub paste_count: i64,
    pub storage_bytes: i64,
}

pub async fn user_usage(
    State(state): State<AppState>,
    Path(sub): Path<String>,
) -> Json<UsageResponse> {
    // Cast the SUM: in Postgres, SUM over a BIGINT column yields NUMERIC,
    // which would fail to decode as i64.
    let (paste_count, storage_bytes) = sqlx::query_as::<_, (i64, i64)>(
        "SELECT COUNT(*), COALESCE(SUM(size_bytes), 0)::BIGINT FROM pastes WHERE user_id = $1",
    )
    .bind(&sub)
    .fetch_one(&state.db)
    .await
    .unwrap_or((0, 0));
    Json(UsageResponse { paste_count, storage_bytes })
}
```

**Step 2: Register the route**

In `crates/txt/src/routes/mod.rs`, add `pub mod usage;`. In `crates/txt/src/main.rs`, add the route:

```rust
.route("/api/usage/{sub}", get(routes::usage::user_usage))
```

Register it near the `"/{id}"` route (line 76) for readability. Note that Axum's router matches by path structure, not registration order, and `"/{id}"` only matches single-segment paths -- the three-segment `/api/usage/{sub}` cannot be shadowed by it.

**Step 3: Add serde dependency if needed**

Check `crates/txt/Cargo.toml` for `serde` -- it should already be there for the existing routes. `Json` comes from `axum` and `Serialize` from `serde`.
**Step 4: Verify it compiles**

Run: `cargo check -p irc-now-txt`

**Step 5: Commit**

```bash
git add crates/txt/src/routes/usage.rs crates/txt/src/routes/mod.rs crates/txt/src/main.rs
git commit -m "feat(txt): add internal usage API endpoint"
```

---

### Task 8: Internal usage API on pics service

**Files:**
- Create: `crates/pics/src/routes/usage.rs`
- Modify: `crates/pics/src/routes/mod.rs` (add `pub mod usage;`)
- Modify: `crates/pics/src/main.rs` (add route)

**Step 1: Write the usage endpoint**

```rust
use axum::extract::{Path, State};
use axum::Json;
use serde::Serialize;
use crate::state::AppState;

#[derive(Serialize)]
pub struct UsageResponse {
    pub image_count: i64,
    pub storage_bytes: i64,
}

pub async fn user_usage(
    State(state): State<AppState>,
    Path(sub): Path<String>,
) -> Json<UsageResponse> {
    // Cast the SUM to BIGINT for the same reason as in the txt service.
    let (image_count, storage_bytes) = sqlx::query_as::<_, (i64, i64)>(
        "SELECT COUNT(*), COALESCE(SUM(size_bytes), 0)::BIGINT FROM images WHERE user_id = $1",
    )
    .bind(&sub)
    .fetch_one(&state.db)
    .await
    .unwrap_or((0, 0));
    Json(UsageResponse { image_count, storage_bytes })
}
```

**Step 2: Register the route**

In `crates/pics/src/routes/mod.rs`, add `pub mod usage;`. In `crates/pics/src/main.rs`, add the route near the `"/{id}"` route (line 108); as with txt, `"/{id}"` matches only single-segment paths, so ordering is not load-bearing:

```rust
.route("/api/usage/{sub}", get(routes::usage::user_usage))
```
**Step 3: Verify it compiles**

Run: `cargo check -p irc-now-pics`

**Step 4: Commit**

```bash
git add crates/pics/src/routes/usage.rs crates/pics/src/routes/mod.rs crates/pics/src/main.rs
git commit -m "feat(pics): add internal usage API endpoint"
```

---

### Task 9: Add internal service URLs to web-api state

**Files:**
- Modify: `crates/web-api/src/state.rs:1-19` (add txt_url, pics_url)
- Modify: `crates/web-api/src/main.rs` (read env vars, pass to state)

**Step 1: Add fields to AppState**

In `crates/web-api/src/state.rs`, add to the struct:

```rust
pub txt_url: String,
pub pics_url: String,
```

**Step 2: Read env vars in main.rs**

In `crates/web-api/src/main.rs`, before the AppState construction (before line 67), add:

```rust
let txt_url = std::env::var("TXT_INTERNAL_URL").unwrap_or_else(|_| "http://txt:8080".to_string());
let pics_url = std::env::var("PICS_INTERNAL_URL").unwrap_or_else(|_| "http://pics:8080".to_string());
```

Add them to the AppState construction:

```rust
let state = AppState {
    db,
    oidc,
    oidc_client: Arc::new(oidc_client),
    stripe_webhook_secret,
    stripe_client,
    stripe_price_id,
    kube: kube_client,
    namespace,
    http_client,
    announcement,
    txt_url,
    pics_url,
};
```

**Step 3: Verify it compiles**

Run: `cargo check -p irc-now-web-api`

**Step 4: Commit**

```bash
git add crates/web-api/src/state.rs crates/web-api/src/main.rs
git commit -m "feat: add txt/pics internal service URLs to AppState"
```

---

### Task 10: Expand dashboard with per-user stats

**Files:**
- Modify: `crates/web-api/src/routes/dashboard.rs` (expand template struct and handler)
- Modify: `crates/web-api/templates/dashboard.html` (add stats cards)

**Step 1: Add usage fetch structs**

At the top of `crates/web-api/src/routes/dashboard.rs`, add:

```rust
use serde::Deserialize;
use k8s_openapi::api::core::v1::Secret;
use k8s_openapi::api::apps::v1::Deployment;
```

Add structs for the internal API responses:

```rust
#[derive(Deserialize)]
struct TxtUsage {
    paste_count: i64,
    storage_bytes: i64,
}
#[derive(Deserialize)]
struct PicsUsage {
    image_count: i64,
    storage_bytes: i64,
}
```

**Step 2: Expand DashboardTemplate**

Replace the existing struct with:

```rust
#[derive(Template, WebTemplate)]
#[template(path = "dashboard.html")]
pub struct DashboardTemplate {
    pub page: PageContext,
    pub has_bouncer: bool,
    pub is_pro: bool,
    pub plan: String,
    pub networks: i64,
    pub channels: i64,
    pub messages_relayed: i64,
    pub bouncer_uptime: String,
    pub paste_count: i64,
    pub paste_storage: String,
    pub pic_count: i64,
    pub pic_storage: String,
}
```

**Step 3: Add a helper to format bytes**

```rust
fn format_bytes(bytes: i64) -> String {
    if bytes < 1024 {
        format!("{bytes} B")
    } else if bytes < 1024 * 1024 {
        format!("{:.1} KB", bytes as f64 / 1024.0)
    } else if bytes < 1024 * 1024 * 1024 {
        format!("{:.1} MB", bytes as f64 / (1024.0 * 1024.0))
    } else {
        format!("{:.1} GB", bytes as f64 / (1024.0 * 1024.0 * 1024.0))
    }
}
```

**Step 4: Expand the handler to fetch stats**

Replace the index handler with:

```rust
pub async fn index(
    State(state): State<AppState>,
    headers: HeaderMap,
    AuthUser(user): AuthUser,
) -> impl IntoResponse {
    let bouncers: Api<SojuBouncer> = Api::namespaced(state.kube.clone(), &state.namespace);
    let lp = ListParams::default().labels(&format!("irc.now/owner={}", user.sub));
    let bouncer_list = bouncers.list(&lp).await.unwrap_or_default();
    let has_bouncer = !bouncer_list.items.is_empty();
    let plan = user.plan.unwrap_or_else(|| "free".to_string());
    let is_pro = plan == "pro";

    // Fetch bouncer stats
    let (mut networks, mut channels, mut messages_relayed) = (0i64, 0i64, 0i64);
    let mut bouncer_uptime = String::from("--");
    if has_bouncer {
        let secret_api: Api<Secret> = Api::namespaced(state.kube.clone(), &state.namespace);
        for b in &bouncer_list.items {
            let name = match &b.metadata.name {
                Some(n) => n,
                None => continue,
            };
            // Get bouncer uptime from the deployment's creation timestamp
            let deploy_api: Api<Deployment> = Api::namespaced(state.kube.clone(), &state.namespace);
            if let Ok(deploy) = deploy_api.get(name).await {
                if let Some(created) =
                    deploy.metadata.creation_timestamp.as_ref()
                {
                    let age = chrono::Utc::now() - created.0;
                    let days = age.num_days();
                    let hours = age.num_hours() % 24;
                    bouncer_uptime = if days > 0 {
                        format!("{days}d {hours}h")
                    } else {
                        format!("{hours}h")
                    };
                }
            }
            // Get soju DB stats
            let uri = match secret_api.get(&format!("{name}-db")).await {
                Ok(secret) => secret.data.as_ref()
                    .and_then(|d| d.get("uri"))
                    .map(|b| String::from_utf8_lossy(&b.0).to_string()),
                Err(_) => continue,
            };
            let Some(uri) = uri else { continue };
            if let Ok((client, connection)) = tokio_postgres::connect(&uri, tokio_postgres::NoTls).await {
                tokio::spawn(async move { let _ = connection.await; });
                if let Ok(row) = client.query_one(r#"SELECT COUNT(*) FROM "Network""#, &[]).await {
                    networks += row.get::<_, i64>(0);
                }
                if let Ok(row) = client.query_one(r#"SELECT COUNT(*) FROM "Channel""#, &[]).await {
                    channels += row.get::<_, i64>(0);
                }
                if let Ok(row) = client.query_one(r#"SELECT COUNT(*) FROM "MessageTarget""#, &[]).await {
                    messages_relayed += row.get::<_, i64>(0);
                }
            }
        }
    }

    // Fetch txt/pics usage via the internal APIs
    let (paste_count, paste_storage_bytes) = match state.http_client
        .get(format!("{}/api/usage/{}", state.txt_url, user.sub))
        .send()
        .await
    {
        Ok(resp) => match resp.json::<TxtUsage>().await {
            Ok(u) => (u.paste_count, u.storage_bytes),
            Err(_) => (0, 0),
        },
        Err(_) => (0, 0),
    };
    let (pic_count, pic_storage_bytes) = match state.http_client
        .get(format!("{}/api/usage/{}", state.pics_url, user.sub))
        .send()
        .await
    {
        Ok(resp) => match resp.json::<PicsUsage>().await {
            Ok(u) => (u.image_count, u.storage_bytes),
            Err(_) => (0, 0),
        },
        Err(_) => (0, 0),
    };

    let page = PageContext::from_request(&headers, &state.announcement);
    let clear = page.flash.is_some();
    let template = DashboardTemplate {
        page,
        has_bouncer,
        is_pro,
        plan,
        networks,
        channels,
        messages_relayed,
        bouncer_uptime,
        paste_count,
        paste_storage: format_bytes(paste_storage_bytes),
        pic_count,
        pic_storage: format_bytes(pic_storage_bytes),
    };
    if clear {
        (axum::response::AppendHeaders([flash::clear_flash()]), template).into_response()
    } else {
        template.into_response()
    }
}
```

**Step 5: Update the dashboard template**

Replace `crates/web-api/templates/dashboard.html`:

```html
{% extends "base.html" %}
{% block title %}dashboard - irc.now{% endblock %}
{% block banner %}{% include "partials/announcement.html" %}{% endblock %}
{% block flash %}{% include "partials/flash.html" %}{% endblock %}
{% block content %}

<h1>dashboard</h1>

<h2>bouncer</h2>
{% if has_bouncer %}
<p>your bouncer is running</p>
<div class="stats-grid">
  <div class="stat"><span class="stat-value">{{ networks }}</span><span class="stat-label">networks</span></div>
  <div class="stat"><span class="stat-value">{{ channels }}</span><span class="stat-label">channels</span></div>
  <div class="stat"><span class="stat-value">{{ messages_relayed }}</span><span class="stat-label">messages</span></div>
  <div class="stat"><span class="stat-value">{{ bouncer_uptime }}</span><span class="stat-label">uptime</span></div>
</div>
<a href="/bouncers">manage bouncers</a>
{% else %}
<p>no bouncer configured</p>
<a href="/bouncers">create bouncer</a>
{% endif %}

<h2>content</h2>
<div class="stats-grid">
  <div class="stat"><span class="stat-value">{{ paste_count }}</span><span class="stat-label">pastes ({{ paste_storage }})</span></div>
  <div class="stat"><span class="stat-value">{{ pic_count }}</span><span class="stat-label">images ({{ pic_storage }})</span></div>
</div>

<h2>plan</h2>
<p>{{ plan }}</p>
{% if !is_pro %}<a href="/billing">upgrade to pro</a>{% endif %}

<h2>account</h2>
<p>manage your display name, email, and password.</p>
<a href="/profile">edit profile</a>
{% endblock %}
```

**Step 6: Add stats-grid CSS**

In `crates/web-api/static/portal.css`, add:

```css
.stats-grid {
  display: grid;
  grid-template-columns: repeat(auto-fit, minmax(100px, 1fr));
  gap: 1rem;
  margin: 1rem 0;
}
.stat { text-align: center; }
.stat-value { display: block; font-size: 1.5rem; font-weight: bold; }
.stat-label { display: block; font-size: 0.85rem; opacity: 0.7; }
```

**Step 7: Verify it compiles**

Run: `cargo check -p irc-now-web-api`

Likely issue: the uptime calculation needs `chrono`. Add `chrono = "0.4"` to the web-api Cargo.toml if it is not already present.

**Step 8: Commit**

```bash
git add crates/web-api/src/routes/dashboard.rs crates/web-api/templates/dashboard.html crates/web-api/static/portal.css
git commit -m "feat: expand dashboard with per-user bouncer and content stats"
```

---

### Task 11: Update Grafana dashboard with new panels

**Files:**
- Modify: `deploy/monitoring/grafana-dashboard.yaml`

**Step 1: Add new panels to the admin dashboard JSON**

Add these panels after the existing panel id 14 (HTTP Latency), inside the `"panels"` array:

```json
{
  "id": 15,
  "title": "MRR",
  "type": "stat",
  "gridPos": { "h": 4, "w": 4, "x": 12, "y": 4 },
  "targets": [{ "expr": "irc_now_mrr_cents / 100", "legendFormat": "MRR" }],
  "fieldConfig": { "defaults": { "unit": "currencyUSD", "thresholds": { "steps": [{ "color": "green", "value": null }] } } }
},
{
  "id": 16,
  "title": "Subscriptions",
  "type": "stat",
  "gridPos": { "h": 4, "w": 4, "x": 16, "y": 4 },
  "targets": [{ "expr": "irc_now_subscriptions_active", "legendFormat": "Active" }],
  "fieldConfig": { "defaults": { "thresholds": { "steps": [{ "color": "green", "value": null }] } } }
},
{
  "id": 17,
  "title": "Signups (7d)",
  "type": "stat",
  "gridPos": { "h": 4, "w": 4, "x": 20, "y": 4 },
  "targets": [{ "expr": "irc_now_signups_7d", "legendFormat": "7d" }],
  "fieldConfig": { "defaults": { "thresholds": { "steps": [{ "color": "blue", "value": null }] } } }
},
{
  "id": 18,
  "title": "Conversion Rate (30d)",
  "type": "stat",
  "gridPos": { "h": 4, "w": 6, "x": 0, "y": 32 },
  "targets": [{ "expr": "irc_now_conversion_rate_30d * 100", "legendFormat": "Rate" }],
  "fieldConfig": { "defaults": { "unit": "percent", "thresholds": { "steps": [{ "color": "yellow", "value": null }, { "color": "green", "value": 5 }] } } }
},
{
  "id": 19,
  "title": "Churn Rate (30d)",
  "type": "stat",
  "gridPos": { "h": 4, "w": 6, "x": 6, "y": 32 },
  "targets": [{ "expr": "irc_now_churn_rate_30d * 100", "legendFormat": "Rate" }],
  "fieldConfig": { "defaults": { "unit": "percent", "thresholds": { "steps": [{ "color": "green", "value": null }, { "color": "yellow", "value": 5 }, { "color": "red", "value": 10 }] } } }
},
{
  "id": 20,
  "title": "Revenue Over Time",
  "type": "timeseries",
  "gridPos": { "h": 8, "w": 12, "x": 12, "y": 32 },
  "targets": [
    { "expr": "irc_now_mrr_cents / 100", "legendFormat": "MRR" },
    { "expr": "irc_now_subscriptions_active", "legendFormat": "Subscriptions" }
  ],
  "fieldConfig": { "defaults": { "unit": "currencyUSD" } }
}
```

**Step 2: Commit**

```bash
git add deploy/monitoring/grafana-dashboard.yaml
git commit -m "feat: add MRR, conversion, churn panels to Grafana dashboard"
```

---

### Task 12: Deploy and verify

**Step 1: Apply the events migration to accounts-db**

```bash
oc exec -n irc-josie-cloud accounts-db-1 -- psql -U app -d accounts -f - < sql/004_create_events.sql
```

If the `last_login_at` migration hasn't been applied:

```bash
oc exec -n irc-josie-cloud accounts-db-1 -- psql -U app -d accounts -f - < sql/005_add_last_login_at.sql
```

**Step 2: Build and deploy txt + pics (with the new usage endpoints)**

```bash
tar czf /tmp/build.tar.gz --exclude='target' --exclude='.git' \
  --exclude='irc-now-landing-page' --exclude='status' --exclude='design' \
  --exclude='./docs' --exclude='notes' Cargo.toml Cargo.lock crates/
oc start-build txt --from-archive=/tmp/build.tar.gz -n irc-josie-cloud --follow
oc start-build pics --from-archive=/tmp/build.tar.gz -n irc-josie-cloud --follow
```

**Step 3: Build and deploy web-api**

```bash
oc start-build web-api --from-archive=/tmp/build.tar.gz -n irc-josie-cloud --follow
```

Set the new env vars on the web-api deployment:

```bash
oc set env deployment/web-api TXT_INTERNAL_URL=http://txt:8080 PICS_INTERNAL_URL=http://pics:8080 -n irc-josie-cloud
```

**Step 4: Apply the Grafana dashboard**

```bash
oc apply -f deploy/monitoring/grafana-dashboard.yaml -n irc-josie-cloud
```

**Step 5: Verify**

- Log in to my.irc.now/dashboard -- check that the stats cards render
- Check my.irc.now/metrics -- verify the new gauges appear
- Check stats.irc.now -- verify the new Grafana panels show data
- Create a test event by logging out and back in, then verify the `events` table has rows:

```bash
oc exec -n irc-josie-cloud accounts-db-1 -- psql -U app -d accounts -c "SELECT * FROM events ORDER BY created_at DESC LIMIT 5"
```