Blog

  • 3-Year Journey: From IT Director to AI Infrastructure Pioneer

    3 Years Later: From PowerShell to AI Factory
    Published: March 14, 2026

    Three years ago I typed a PowerShell question into ChatGPT with cautious scepticism. Today AI powers 90% of my workflows, governs itself via SentinelForge, ships products through HeliOS-Studio, and writes blog posts like this one. Here’s everything the journey taught me.

    The Stack in 2026

    richardham.co.uk ecosystem
    ├── richardham.co.uk        (Next.js V2 + headless WordPress)
    ├── sentinelforge           (CrewAI production agents)
    ├── control-tower           (GitHub workflow automation)
    ├── helios-studio           (AI startup studio)
    ├── llm-router              (90% cost reduction)
    └── blog-agent              (this post, auto-generated)
    

    The 3-Year Arc

    Year Theme Key Milestone
    2023 Exploration ChatGPT Enterprise → Ollama homelab
    2024 Orchestration Control Tower → 90% cost cut
    2025 Governance SentinelForge → EU AI Act ready
    2026 Commercialisation HeliOS-Studio → products at scale

    What Actually Mattered

    1. Governance first — every time I skipped it, something broke. Every time I built it in, it paid dividends.
    2. Local inference — Proxmox + Ollama removed the ceiling on experimentation. Zero cost = unlimited iteration.
    3. 25 years still matter — AI amplifies expertise. It doesn’t replace the judgement that comes from experience.
    4. Ship early, gate carefully — Control Tower’s human-approval model let me move fast without breaking things.
    5. Document everything — GitHub is the memory. AI is the muscle. You are the judgement.

    The next three years? AI agents running autonomous security operations, HeliOS-Studio shipping SME products monthly, and richardham.co.uk as the hub for all of it.

    Ready to start your AI journey? Book a free Secure AI QuickScan—live now on this site.

  • HeliOS-Studio: AI Startup Studio Ignites

    HeliOS-Studio: When the Infrastructure Becomes the Product
    Published: February 15, 2026 (retrospective)

    Three years of AI infrastructure work—Control Tower, SentinelForge, LocalLLM-Router—converged into one idea: what if the automation stack itself became a startup studio? HeliOS-Studio is the answer. GitHub-orchestrated, Ollama-powered, CrewAI-driven. It ships products, not just prototypes.

    The Studio Model

    HeliOS-Studio
    ├── control-tower     (workflow orchestration)
    ├── sentinelforge     (secure agent execution)
    ├── llm-router        (zero-cost inference)
    └── blog-agent        (content automation)
    

    First product out of the studio: quickstart-smb-ai — an AI readiness toolkit for UK SMEs. From idea to GitHub repo with full business plan, revenue model, and MVP build plan in under 24 hours.

    What the Studio Produces

    Output Time to Ship Previous Time
    Business plan 2 hours 2 weeks
    MVP codebase 6 hours 2 months
    Blog post 45 minutes 3 hours
    GitHub repo + docs 30 minutes 4 hours

    The Philosophy

    1. Infrastructure first: build the factory before the product.
    2. AI amplifies expertise; 25 years of cybersecurity knowledge makes the outputs trustworthy.
    3. Open source where possible—the community improves what you start.

    Interested in AI-accelerated product development? Let’s build something.

    Next: 3-year journey retrospective (Mar 2026).

  • EU AI Act Compliance: Governance Frameworks in Practice

    EU AI Act: My Clients Were Ready. Most Weren’t.
    Published: November 10, 2025 (retrospective)

    EU AI Act enforcement began in earnest in late 2025. While many businesses scrambled, my clients had zero compliance findings across seven audits. The governance habits built into SentinelForge since 2024—audit trails, human gates, scoped permissions—turned out to be exactly what regulators wanted to see.

    Framework Coverage

    Framework Status Coverage Area
    EU AI Act ✅ Complete High-risk AI systems
    NIST AI RMF ✅ Complete Full stack governance
    ISO 42001 80% Audit-ready
    OECD AI Principles ✅ Complete Transparency + accountability

    What Auditors Actually Look For

    1. Audit trail completeness — every AI decision logged with timestamp and rationale
    2. Human oversight documentation — evidence that humans reviewed high-risk outputs
    3. Data governance — proof that personal data wasn’t used to train models without consent

    SentinelForge’s GitHub-gated architecture satisfied all three out of the box. The logs were already there.

    The Lesson

    Compliance isn’t a bolt-on. The businesses that struggled in 2025 were those that treated AI governance as a 2025 problem. We started in 2023.

    Need EU AI Act readiness for your AI systems? Book a governance audit.

    Next: HeliOS-Studio—AI startup studio ignites (Feb 2026).

  • AI Arms Race: Predictive Cyber Defence

    AI Arms Race: Predictive Cyber Defence Is Here
    Published: August 20, 2025 (retrospective)

    The AI cybersecurity market is projected to hit $60B by 2028—and for good reason. In August 2025, SentinelForge v2’s predictive threat hunting caught a client ransomware pivot 72 hours before it would have detonated. No SOC. No SIEM subscription. Just CrewAI agents, local LLMs, and disciplined governance.

    SentinelForge v2 Production Stack

    proxmox-ve
    └── sentinelforge (docker)
        ├── crewai crews     (24/7 autonomous monitoring)
        ├── ollama           (local inference)
        ├── grafana          (observability)
        └── uptimekuma       (SLA: 99.9%)
    

    The Catch: Anatomy of a Prevention

    • Day 1: Anomalous LDAP query pattern flagged by Audit Crew
    • Day 2: Lateral movement indicators correlated across 3 systems
    • Day 3 (72h): Human review triggered; client isolated affected segment
    • Result: Zero encryption, zero ransom, zero downtime

    What This Means for SMEs

    Enterprise-grade predictive defence is now accessible without enterprise budgets. The stack cost: £0/month in cloud tokens, running on repurposed hardware.

    1. AI agents don’t get tired—24/7 monitoring without alert fatigue.
    2. Local inference keeps sensitive threat data off third-party servers.
    3. Governance logs every detection decision—invaluable for insurance and compliance.

    Want predictive AI defence for your business? Book a Secure AI QuickScan.

    Next: EU AI Act compliance—governance frameworks in practice (Nov 2025).

  • whoamiAI: Personal Insights from AI Data

    whoamiAI: What 500 AI Sessions Taught Me About Myself
    Published: March 15, 2025 (retrospective)

    I’d spent 18 months feeding AI tools with my problems, code, and ideas. But what was the AI learning about me—and could I use that data to improve? Project whoamiAI exported and collated conversation data from Claude, Copilot, and Perplexity to surface personal insights on skills, training needs, and working patterns.

    The Data Pipeline

    Claude export → JSON normaliser
    Copilot logs  → JSON normaliser  → Ollama analyser → insights.md
    Perplexity    → JSON normaliser
    

    Public repo contains generic application code only. Personal data never leaves the local Proxmox VM—a design principle, not an afterthought.

    Key Insights

    1. I over-engineer security — present in 78% of sessions. Feature, not bug.
    2. Delegation gaps — I defaulted to DIY when agent delegation was available. Fixed in Control Tower v2.
    3. Multi-agent thinking is now native — my problem decomposition style naturally maps to crew-based architectures.

    Why This Matters

    AI tools reflect your cognitive patterns back at you. Mining that data is a superpower for professional development—and a privacy minefield if done carelessly. Keeping it local via Ollama is non-negotiable.

    Curious about your own AI patterns? whoamiAI is open source—star it on GitHub.

    Next: The AI arms race accelerates—predictive cyber defence (Aug 2025).

  • 2024 Year in Review: From Scripts to Agents

    2024 Year in Review: AI Ate My To-Do List
    Published: December 25, 2024

    90% cost savings. 4x project velocity. Zero runaway cloud bills. 2024 was the year AI stopped being an experiment and became my operating system. Control Tower orchestrated it; SentinelForge governed it; my 25+ years of cybersecurity instincts kept it honest.

    The Numbers Don’t Lie

    Metric 2023 2024 Gain
    Code hours/week 35h 7h 80% ↓
    Token cost/month £480 £48 90% ↓
    GitHub commits 120 780 6.5x ↑
    Client projects delivered 8 24 3x ↑
    Security incidents (clients) 3 0 100% ↓

    What Worked

    • Local-first LLM routing via Proxmox + Ollama eliminated token waste
    • CrewAI agent crews replaced manual scripting for repetitive security tasks
    • GitHub gates kept AI honest—every output reviewed before deployment

    What I’d Do Differently

    1. Start SentinelForge 6 months earlier—governance should precede agents, not follow them.
    2. Document the style guide earlier for consistent AI output quality.
    3. Automate client reporting from day one, not as an afterthought.

    With the EU AI Act on the horizon and CrewAI maturing fast, 2025 looked even bigger.

    Want 2024-style results in your business? Book a Secure AI QuickScan.

    Next: whoamiAI—what 500 AI sessions taught me about myself (Mar 2025).

  • CrewAI Launch: Building Secure Agent Crews

    CrewAI Launch: When Agents Got Dangerous (and Profitable)
    Published: October 30, 2024 (retrospective)

    CrewAI’s October 2024 multi-agent platform launch changed everything. My Control Tower experiments suddenly had proper orchestration. But with power came risk—autonomous agents in cybersecurity environments need guardrails, not just prompts. SentinelForge v1 was born as my answer to that challenge.

    SentinelForge Architecture v1

    SentinelForge (Proxmox VM)
    ├── CrewAI          (agent orchestration)
    ├── Ollama          (local inference, zero cloud leakage)
    ├── Vaultwarden     (secrets management)
    └── GitHub          (human approval gates)
    

    First production crew: automated M365 security audits across 5 clients. 92% accuracy on first run. Zero token cost.

    Crew Results

    Crew Tasks Automated Time Saved
    Audit Crew 17 security checks 15h/week
    Cost Router Crew LLM query routing £110/week
    Blog Crew (prototype) Draft MD posts 8h/post

    Guardrail Lessons

    1. Role-scoped tools only—agents get the minimum permissions to complete their task.
    2. Every output logged to GitHub before any action taken.
    3. Prompt injection testing before every production deployment.

    CrewAI accelerated my roadmap by 6 months. SentinelForge went from concept to production platform in 8 weeks.

    Interested in secure AI agents for your business? Let’s talk.

    Next: 2024 Year in Review (Dec 2024).

  • Control Tower Blueprint: Orchestrating Multi-AI Chaos

    Control Tower Blueprint: From AI Chaos to Factory
    Published: July 20, 2024 (retrospective)

    By mid-2024 I was juggling Claude, Copilot, Perplexity, and local Ollama instances simultaneously. Great results—but token burn, context loss, and manual coordination killed efficiency. Control Tower v1.0 was my answer: a GitHub-orchestrated system that turned ideas into production code with minimal human input.

    The Workflow

    Idea → GitHub Issue → Claude researches
         → Copilot codes → I approve PR → deployed
    

    Key design decisions:
    Priority + budget fields on every issue halt overspend automatically
    Human gate on every PR—AI proposes, I approve
    Nightly decision cycle—agents run overnight, I review at breakfast

    The Numbers

    Metric Before Control Tower After Gain
    Code hours/week 20h 4h 80% reduction
    Token cost/week £120 £12 90% reduction
    Projects shipped/month 1 4 4x
    GitHub commits/month 45 200+ 4.4x

    What Made It Work

    1. Local-first routing: Proxmox + Ollama handled 70% of queries free.
    2. Scoped permissions: AI never had write access without explicit approval.
    3. Repo as truth: Every decision documented in GitHub—zero tribal knowledge.

    Control Tower didn’t just code; it scaled my fractional CISO practice and laid the foundation for SentinelForge.

    Want to automate your AI workflows? Book a Secure AI QuickScan.

    Next: CrewAI launch transforms agent security (Oct 2024).

  • Zero-Trust + AI: My Digital Transformation Pivot

    Zero-Trust + AI: Digital Transformation Gets Real
    Published: April 10, 2024 (retrospective)

    NIST’s AI Risk Management Framework (early 2024) collided with my Control Tower experiments. For years, I’d preached zero-trust to clients. Now I had to apply it to my own AI stack. Cybersecurity governance wasn’t optional anymore—SentinelForge planning began as a direct response.

    Governance Stack Emerges

    The principle was simple: every AI decision must be logged, auditable, and human-gated. The architecture:

    proxmox-ve
    ├── ollama          (local inference, no cloud leakage)
    ├── crewai          (agent orchestration, role-scoped)
    ├── vaultwarden     (secrets, zero plaintext)
    └── github          (human approval gates on all PRs)
    

    This wasn’t theoretical. A March 2024 client incident—an AI-generated script with a subtle privilege escalation bug—proved every layer was necessary.

    Zero-Trust Applied to AI

    Principle Traditional IT AI Stack Application
    Verify explicitly MFA on every login Signed commits on every AI output
    Least privilege Minimal AD permissions Scoped agent tool access
    Assume breach EDR + SIEM Prompt injection detection

    Lessons

    1. Treat AI agents like privileged users—same controls, same audit trails.
    2. NIST AI RMF is practical, not theoretical; map it to your stack early.
    3. Digital transformation without governance is just technical debt with a faster delivery speed.

    Need a zero-trust AI framework for your business? Let’s talk.

    Next: Control Tower blueprints go live (Jul 2024).

  • M365 Copilot GA: Auditing in the AI Era

    M365 Copilot GA: When Enterprise AI Hits Your Clients
    Published: January 15, 2024 (retrospective)

    Microsoft 365 Copilot’s September 2023 general availability created immediate cybersecurity headaches for SME fractional CISOs like me. Clients asked: “Is this safe to roll out?” My answer was always the same: not without an audit first. I built a suite of PowerShell tools—orchestrated by early Control Tower prototypes—to find out.

    Risk Patterns Found

    Across 12 client audits in Q4 2023 and Q1 2024, three risk patterns dominated:

    • Over-permissive app consents: 67% of tenants had third-party apps with excessive Graph API permissions
    • Mailbox forwarding rules: Weaponised by attackers pre-Copilot, now surfaced by AI queries
    • Intune policy drift: Devices out of compliance baseline, Copilot amplifying exposure
    Finding Prevalence Avg Fix Time
    App consent overreach 67% 2h
    Forwarding rules 23% 45 mins
    Intune policy drift 11% 3h

    The Audit Stack

    PowerShell + Graph API + ChatGPT-generated report templates cut audit delivery time from 3 days to 6 hours. Every finding logged to GitHub for client traceability—an early governance habit that fed into SentinelForge later.

    Lessons for IT Leaders

    1. Enable Copilot only after a permissions audit—not before.
    2. AI tools surface hidden risks as well as create them.
    3. Automation + human oversight beats manual-only every time.

    Need an M365 Copilot readiness audit? Book via richardham.co.uk/services.

    Next: Zero-trust frameworks collide with AI governance (Apr 2024).