Tag: AI

  • HeliOS-Studio: AI Startup Studio Ignites

    HeliOS-Studio: When the Infrastructure Becomes the Product
    Published: February 15, 2026 (retrospective)

    Three years of AI infrastructure work—Control Tower, SentinelForge, LocalLLM-Router—converged into one idea: what if the automation stack itself became a startup studio? HeliOS-Studio is the answer. GitHub-orchestrated, Ollama-powered, CrewAI-driven. It ships products, not just prototypes.

    The Studio Model

    HeliOS-Studio
    ├── control-tower     (workflow orchestration)
    ├── sentinelforge     (secure agent execution)
    ├── llm-router        (zero-cost inference)
    └── blog-agent        (content automation)
    

    First product out of the studio: quickstart-smb-ai — an AI readiness toolkit for UK SMEs. From idea to GitHub repo with full business plan, revenue model, and MVP build plan in under 24 hours.

    What the Studio Produces

    Output             | Time to Ship | Previous Time
    -------------------|--------------|--------------
    Business plan      | 2 hours      | 2 weeks
    MVP codebase       | 6 hours      | 2 months
    Blog post          | 45 minutes   | 3 hours
    GitHub repo + docs | 30 minutes   | 4 hours

    The Philosophy

    1. Infrastructure first: build the factory before the product.
    2. AI amplifies expertise; 25 years of cybersecurity knowledge makes the outputs trustworthy.
    3. Open source where possible—the community improves what you start.

    Interested in AI-accelerated product development? Let’s build something.

    Next: 3-year journey retrospective (Mar 2026).

  • EU AI Act Compliance: Governance Frameworks in Practice

    EU AI Act: My Clients Were Ready. Most Weren’t.
    Published: November 10, 2025 (retrospective)

    EU AI Act enforcement began in earnest in late 2025. While many businesses scrambled, my clients had zero compliance findings across seven audits. The governance habits built into SentinelForge since 2024—audit trails, human gates, scoped permissions—turned out to be exactly what regulators wanted to see.

    Framework Coverage

    Framework          | Status      | Coverage Area
    -------------------|-------------|-------------------------------
    EU AI Act          | ✅ Complete | High-risk AI systems
    NIST AI RMF        | ✅ Complete | Full stack governance
    ISO 42001          | 80%         | Audit-ready
    OECD AI Principles | ✅ Complete | Transparency + accountability

    What Auditors Actually Look For

    1. Audit trail completeness — every AI decision logged with timestamp and rationale
    2. Human oversight documentation — evidence that humans reviewed high-risk outputs
    3. Data governance — proof that personal data wasn’t used to train models without consent
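    The three requirements above can be captured by a minimal decision log. This is an illustrative sketch, not SentinelForge's actual schema: the `DecisionRecord` fields and the high-risk gate rule are assumptions chosen to show the shape auditors asked for (timestamp, rationale, evidence of human review).

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One AI decision, logged for audit (illustrative schema)."""
    action: str                    # what the agent did or proposed
    rationale: str                 # the "why" that auditors look for
    risk_level: str                # "low" or "high"
    human_reviewed: bool = False   # evidence of oversight on high-risk outputs
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

class DecisionLog:
    def __init__(self):
        self._records = []

    def record(self, rec: DecisionRecord) -> DecisionRecord:
        # High-risk decisions must carry evidence of human review before logging.
        if rec.risk_level == "high" and not rec.human_reviewed:
            raise ValueError("high-risk decision requires human review")
        self._records.append(asdict(rec))
        return rec

log = DecisionLog()
log.record(DecisionRecord(action="rotate API key",
                          rationale="key age exceeded 90 days",
                          risk_level="low"))
```

    The point of the gate in `record` is that a missing human sign-off is a hard error, not a warning: the log can never contain an unreviewed high-risk decision.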

    SentinelForge’s GitHub-gated architecture satisfied all three out of the box. The logs were already there.

    The Lesson

    Compliance isn’t a bolt-on. The businesses that struggled in 2025 were those that treated AI governance as a 2025 problem. We started in 2023.

    Need EU AI Act readiness for your AI systems? Book a governance audit.

    Next: HeliOS-Studio—AI startup studio ignites (Feb 2026).

  • AI Arms Race: Predictive Cyber Defence

    AI Arms Race: Predictive Cyber Defence Is Here
    Published: August 20, 2025 (retrospective)

    The AI cybersecurity market is projected to hit $60B by 2028—and for good reason. In August 2025, SentinelForge v2’s predictive threat hunting caught a client ransomware pivot 72 hours before it would have detonated. No SOC. No SIEM subscription. Just CrewAI agents, local LLMs, and disciplined governance.

    SentinelForge v2 Production Stack

    proxmox-ve
    └── sentinelforge (docker)
        ├── crewai crews     (24/7 autonomous monitoring)
        ├── ollama           (local inference)
        ├── grafana          (observability)
        └── uptimekuma       (SLA: 99.9%)
    

    The Catch: Anatomy of a Prevention

    • Day 1: Anomalous LDAP query pattern flagged by Audit Crew
    • Day 2: Lateral movement indicators correlated across 3 systems
    • Day 3 (72h): Human review triggered; client isolated affected segment
    • Result: Zero encryption, zero ransom, zero downtime
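    The escalation logic behind that timeline can be sketched as a simple correlation rule: trigger human review once related indicators appear on several distinct systems inside a time window. The three-system threshold and 72-hour window here are assumptions for illustration, not SentinelForge's actual tuning.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def needs_human_review(events, window=timedelta(hours=72), min_systems=3):
    """Escalate when one indicator is seen on >= min_systems hosts in-window."""
    systems_by_indicator = defaultdict(set)
    latest = max(e["time"] for e in events)
    for e in events:
        if latest - e["time"] <= window:
            systems_by_indicator[e["indicator"]].add(e["host"])
    return any(len(hosts) >= min_systems for hosts in systems_by_indicator.values())

t0 = datetime(2025, 8, 17, 9, 0)
events = [
    {"indicator": "ldap-recon", "host": "dc01",   "time": t0},
    {"indicator": "ldap-recon", "host": "file01", "time": t0 + timedelta(hours=20)},
    {"indicator": "ldap-recon", "host": "app02",  "time": t0 + timedelta(hours=40)},
]
print(needs_human_review(events))  # three hosts inside 72h: escalate
```

    Single-host anomalies stay with the autonomous crews; only cross-system correlation wakes a human, which is what kept alert fatigue at zero.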

    What This Means for SMEs

    Enterprise-grade predictive defence is now accessible without enterprise budgets. The stack cost: £0/month in cloud tokens, running on repurposed hardware.

    1. AI agents don’t get tired—24/7 monitoring without alert fatigue.
    2. Local inference keeps sensitive threat data off third-party servers.
    3. Governance logs every detection decision—invaluable for insurance and compliance.

    Want predictive AI defence for your business? Book a Secure AI QuickScan.

    Next: EU AI Act compliance—governance frameworks in practice (Nov 2025).

  • whoamiAI: Personal Insights from AI Data

    whoamiAI: What 500 AI Sessions Taught Me About Myself
    Published: March 15, 2025 (retrospective)

    I’d spent 18 months feeding AI tools with my problems, code, and ideas. But what was the AI learning about me—and could I use that data to improve? Project whoamiAI exported and collated conversation data from Claude, Copilot, and Perplexity to surface personal insights on skills, training needs, and working patterns.

    The Data Pipeline

    Claude export  ─┐
    Copilot logs   ─┼→ JSON normaliser → Ollama analyser → insights.md
    Perplexity     ─┘
    

    Public repo contains generic application code only. Personal data never leaves the local Proxmox VM—a design principle, not an afterthought.
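    The normalisation step could look something like this sketch: collapse each tool's export into one common record shape before local analysis. The per-tool field names here are assumptions for illustration; the real export schemas differ and change over time.

```python
import json

def normalise(source: str, raw: dict) -> dict:
    """Map one tool-specific export record to a common shape (hypothetical fields)."""
    extractors = {
        "claude":     lambda r: (r.get("name", ""), r.get("chat_messages", [])),
        "copilot":    lambda r: (r.get("title", ""), r.get("turns", [])),
        "perplexity": lambda r: (r.get("query", ""), r.get("thread", [])),
    }
    title, messages = extractors[source](raw)
    return {
        "source": source,
        "title": title,
        "n_messages": len(messages),
        "text": " ".join(m.get("text", "") for m in messages),
    }

record = normalise("claude", {
    "name": "PowerShell audit help",
    "chat_messages": [{"text": "How do I parse M365 audit logs?"}],
})
print(json.dumps(record, indent=2))
```

    Once every source emits the same record shape, the Ollama analyser only has to reason about one format, and the raw exports never need to leave the VM.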

    Key Insights

    1. I over-engineer security — present in 78% of sessions. Feature, not bug.
    2. Delegation gaps — I defaulted to DIY when agent delegation was available. Fixed in Control Tower v2.
    3. Multi-agent thinking is now native — my problem decomposition style naturally maps to crew-based architectures.

    Why This Matters

    AI tools reflect your cognitive patterns back at you. Mining that data is a superpower for professional development—and a privacy minefield if done carelessly. Keeping it local via Ollama is non-negotiable.

    Curious about your own AI patterns? whoamiAI is open source—star it on GitHub.

    Next: The AI arms race accelerates—predictive cyber defence (Aug 2025).

  • 2024 Year in Review: From Scripts to Agents

    2024 Year in Review: AI Ate My To-Do List
    Published: December 25, 2024

    90% cost savings. 4x project velocity. Zero runaway cloud bills. 2024 was the year AI stopped being an experiment and became my operating system. Control Tower orchestrated it; SentinelForge governed it; my 25+ years of cybersecurity instincts kept it honest.

    The Numbers Don’t Lie

    Metric                       | 2023 | 2024 | Gain
    -----------------------------|------|------|--------
    Code hours/week              | 35h  | 7h   | 80% ↓
    Token cost/month             | £480 | £48  | 90% ↓
    GitHub commits               | 120  | 780  | 6.5x ↑
    Client projects delivered    | 8    | 24   | 3x ↑
    Security incidents (clients) | 3    | 0    | 100% ↓

    What Worked

    • Local-first LLM routing via Proxmox + Ollama eliminated token waste
    • CrewAI agent crews replaced manual scripting for repetitive security tasks
    • GitHub gates kept AI honest—every output reviewed before deployment

    What I’d Do Differently

    1. Start SentinelForge 6 months earlier—governance should precede agents, not follow them.
    2. Document the style guide earlier for consistent AI output quality.
    3. Automate client reporting from day one, not as an afterthought.

    With the EU AI Act on the horizon and CrewAI maturing fast, 2025 looked even bigger.

    Want 2024-style results in your business? Book a Secure AI QuickScan.

    Next: whoamiAI—what 500 AI sessions taught me about myself (Mar 2025).

  • Control Tower Blueprint: Orchestrating Multi-AI Chaos

    Control Tower Blueprint: From AI Chaos to Factory
    Published: July 20, 2024 (retrospective)

    By mid-2024 I was juggling Claude, Copilot, Perplexity, and local Ollama instances simultaneously. Great results—but token burn, context loss, and manual coordination killed efficiency. Control Tower v1.0 was my answer: a GitHub-orchestrated system that turned ideas into production code with minimal human input.

    The Workflow

    Idea → GitHub Issue → Claude researches
         → Copilot codes → I approve PR → deployed
    

    Key design decisions:
    • Priority + budget fields on every issue halt overspend automatically
    • Human gate on every PR—AI proposes, I approve
    • Nightly decision cycle—agents run overnight, I review at breakfast
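    The budget halt can be sketched as a pure decision function over issue metadata. The field names (`priority`, `budget_gbp`, `spent_gbp`) are hypothetical labels for this illustration; the real system reads equivalent fields from the GitHub issue itself.

```python
def next_action(issue: dict) -> str:
    """Decide what Control Tower does with an issue (illustrative rule set)."""
    if issue["spent_gbp"] >= issue["budget_gbp"]:
        return "halt"                     # overspend: stop agent work automatically
    if issue["priority"] == "high":
        return "run-now"
    return "queue-for-nightly-cycle"      # default: overnight decision cycle

issue = {"priority": "low", "budget_gbp": 5.00, "spent_gbp": 5.10}
print(next_action(issue))  # halt
```

    Because the check runs before any agent work is dispatched, a runaway loop can never spend past the figure written on the issue.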

    The Numbers

    Metric                 | Before Control Tower | After | Gain
    -----------------------|----------------------|-------|--------------
    Code hours/week        | 20h                  | 4h    | 80% reduction
    Token cost/week        | £120                 | £12   | 90% reduction
    Projects shipped/month | 1                    | 4     | 4x
    GitHub commits/month   | 45                   | 200+  | 4.4x

    What Made It Work

    1. Local-first routing: Proxmox + Ollama handled 70% of queries free.
    2. Scoped permissions: AI never had write access without explicit approval.
    3. Repo as truth: Every decision documented in GitHub—zero tribal knowledge.

    Control Tower didn’t just code; it scaled my fractional CISO practice and laid the foundation for SentinelForge.

    Want to automate your AI workflows? Book a Secure AI QuickScan.

    Next: CrewAI launch transforms agent security (Oct 2024).

  • Zero-Trust + AI: My Digital Transformation Pivot

    Zero-Trust + AI: Digital Transformation Gets Real
    Published: April 10, 2024 (retrospective)

    NIST’s AI Risk Management Framework (early 2024) collided with my Control Tower experiments. For years, I’d preached zero-trust to clients. Now I had to apply it to my own AI stack. Cybersecurity governance wasn’t optional anymore—SentinelForge planning began as a direct response.

    Governance Stack Emerges

    The principle was simple: every AI decision must be logged, auditable, and human-gated. The architecture:

    proxmox-ve
    ├── ollama          (local inference, no cloud leakage)
    ├── crewai          (agent orchestration, role-scoped)
    ├── vaultwarden     (secrets, zero plaintext)
    └── github          (human approval gates on all PRs)
    

    This wasn’t theoretical. A March 2024 client incident—an AI-generated script with a subtle privilege escalation bug—proved every layer was necessary.

    Zero-Trust Applied to AI

    Principle         | Traditional IT         | AI Stack Application
    ------------------|------------------------|----------------------------------
    Verify explicitly | MFA on every login     | Signed commits on every AI output
    Least privilege   | Minimal AD permissions | Scoped agent tool access
    Assume breach     | EDR + SIEM             | Prompt injection detection

    Lessons

    1. Treat AI agents like privileged users—same controls, same audit trails.
    2. NIST AI RMF is practical, not theoretical; map it to your stack early.
    3. Digital transformation without governance is just technical debt delivered faster.

    Need a zero-trust AI framework for your business? Let’s talk.

    Next: Control Tower blueprints go live (Jul 2024).

  • M365 Copilot GA: Auditing in the AI Era

    M365 Copilot GA: When Enterprise AI Hits Your Clients
    Published: January 15, 2024 (retrospective)

    Microsoft 365 Copilot’s November 2023 general availability created immediate cybersecurity headaches for SME fractional CISOs like me. Clients asked: “Is this safe to roll out?” My answer was always the same: not without an audit first. I built a suite of PowerShell tools—orchestrated by early Control Tower prototypes—to find out.

    Risk Patterns Found

    Across 12 client audits in Q4 2023 and Q1 2024, three risk patterns dominated:

    • Over-permissive app consents: 67% of tenants had third-party apps with excessive Graph API permissions
    • Mailbox forwarding rules: Weaponised by attackers pre-Copilot, now surfaced by AI queries
    • Intune policy drift: Devices out of compliance baseline, Copilot amplifying exposure

    Finding               | Prevalence | Avg Fix Time
    ----------------------|------------|-------------
    App consent overreach | 67%        | 2h
    Forwarding rules      | 23%        | 45 mins
    Intune policy drift   | 11%        | 3h
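    The app-consent check, the most common finding, reduces to a set intersection over granted scopes. The original tooling was PowerShell against the Graph API; this Python sketch works on already-exported consent data instead, and the high-risk scope list is an assumption for illustration, not a complete policy.

```python
# Hypothetical shortlist of Graph scopes treated as high-risk for this sketch.
HIGH_RISK_SCOPES = {"Mail.ReadWrite", "Directory.ReadWrite.All", "Files.ReadWrite.All"}

def overreaching_apps(consents):
    """consents: [{'app': name, 'scopes': [...]}] -> apps holding high-risk scopes."""
    return [
        {"app": c["app"], "risky": sorted(set(c["scopes"]) & HIGH_RISK_SCOPES)}
        for c in consents
        if set(c["scopes"]) & HIGH_RISK_SCOPES
    ]

findings = overreaching_apps([
    {"app": "LegacySync", "scopes": ["Mail.ReadWrite", "User.Read"]},
    {"app": "Surveys",    "scopes": ["User.Read"]},
])
print(findings)
```

    Running something like this per tenant is what surfaced the 67% overreach figure: most flagged apps needed only `User.Read` but had been consented far broader years earlier.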

    The Audit Stack

    PowerShell + Graph API + ChatGPT-generated report templates cut audit delivery time from 3 days to 6 hours. Every finding logged to GitHub for client traceability—an early governance habit that fed into SentinelForge later.

    Lessons for IT Leaders

    1. Enable Copilot only after a permissions audit—not before.
    2. AI tools surface hidden risks as well as create them.
    3. Automation + human oversight beats manual-only every time.

    Need an M365 Copilot readiness audit? Book via richardham.co.uk/services.

    Next: Zero-trust frameworks collide with AI governance (Apr 2024).

  • GitHub AI Boom: 65k+ Projects Spark My Homelab

    GitHub AI Boom: My Homelab Goes Local
    Published: November 20, 2023 (retrospective)

    November 2023’s generative AI explosion on GitHub—repos tripled to 65k+—pushed me from cloud experimentation to Proxmox homelab builds. ChatGPT Enterprise handled scripting, but token costs and privacy concerns demanded local LLMs. Enter Ollama’s early hype.

    From Cloud to Containers

    GitHub’s AI project surge validated my pivot: Ollama promised free, private inference on Proxmox VMs. First homelab setup:
    – Llama2 on RTX 3090 GPU passthrough
    – QNAP NFS for model storage
    – Uptime Kuma monitoring from day one

    Test                   | Latency | Cost            | Verdict
    -----------------------|---------|-----------------|----------
    GPT-4 (cloud)          | 2.1s    | $0.03/1k tokens | Reference
    Ollama Llama2          | 1.8s    | $0              | Winner
    Local Router (planned) | 1.2s    | $0              | Future

    First month: 87% cost savings routing simple queries locally. Cloud reserved for complex reasoning tasks only.

    Early Router Sketches

    The LocalLLM-Router concept was born here: route 70% of queries to Ollama, reserve cloud API for edge cases. This became the foundation of Control Tower’s cost engine a year later.
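    The earliest router sketch was little more than a heuristic: default everything to local Ollama and escalate only queries that look like complex reasoning. The keyword list and length threshold below are illustrative assumptions, not the rules the eventual LocalLLM-Router shipped with.

```python
# Hypothetical hints that a query needs heavyweight cloud reasoning.
COMPLEX_HINTS = ("prove", "architecture review", "threat model", "legal")

def route(query: str) -> str:
    """Pick a backend: local Ollama by default, cloud only for hard queries."""
    complex_query = (len(query) > 2000
                     or any(h in query.lower() for h in COMPLEX_HINTS))
    return "cloud-gpt4" if complex_query else "ollama-llama2"

print(route("Write a bash script to rotate QNAP backups"))   # ollama-llama2
print(route("Threat model this multi-tenant architecture"))  # cloud-gpt4
```

    Even this crude split sends the bulk of day-to-day scripting traffic to the free local path, which is where the 70% local / 87% savings numbers came from.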

    Lessons

    1. Proxmox GPU passthrough is powerful but finicky—document everything.
    2. Model size matters: 7B handles most IT tasks; 13B for nuanced cybersecurity analysis.
    3. Privacy wins: Sensitive client data never leaves the homelab.

    Want to build your own AI homelab? Let’s talk.

    Next: M365 Copilot GA forces enterprise governance thinking (Jan 2024).

  • ChatGPT Enterprise: My First Steps into AI-Assisted IT

    ChatGPT Enterprise: My First Steps into AI-Assisted IT
    Published: September 25, 2023 (retrospective)

    2023 marked my pivot from 25+ years of pure IT/cybersecurity scripting to blending AI into daily workflows—starting with OpenAI’s ChatGPT Enterprise launch in late August. As a fractional IT Director managing M365 environments and Proxmox homelabs, I was sceptical: could AI handle PowerShell automation without hallucinating disasters? This post recaps those early experiments, wins, and the spark that ignited my AI journey.

    The Catalyst: Enterprise AI Goes Live

    ChatGPT Enterprise dropped on August 28, 2023, promising admin controls, data privacy, and unlimited GPT-4 access—perfect for SME cybersecurity without the free-tier limits. I spun it up immediately for real client work: generating Intune policies, parsing M365 audit logs, and drafting Bash scripts for QNAP backups. No more hours tweaking regex—AI nailed 80% on first try.

    Early tests:
    – Converted manual PowerShell M365 mailbox audits to reusable functions
    – Automated DD-WRT router configs for client VPNs
    – Brainstormed cPanel/WHM hardening checklists

    Key Wins and Pitfalls

    Q3–Q4 Milestones:
    • September: First AI-generated Intune deployment script—deployed live, zero errors. Saved 4 hours per client.
    • October: Ollama early access teased local runs, but cloud GPT-4 crushed complex queries.
    • November: GitHub’s generative AI repos tripled to 65k+, inspiring my first LocalLLM-Router sketches.

    Experiment      | Time Saved     | Issues Found
    ----------------|----------------|-----------------------------------------
    M365 Audits     | 4h/client      | Overly verbose outputs
    Intune Policies | 2 days/project | Needed fact-checking
    Backup Scripts  | 3h/setup       | Hallucinated syntax (fixed iteratively)

    Pitfalls taught resilience: AI excelled at boilerplate but flopped on edge cases—my cybersecurity instincts always double-checked outputs.

    Lessons from the Frontlines

    1. Start small: Use AI for scripting grunt work, not strategy.
    2. Local potential: Ollama’s October buzz hinted at cost escapes from cloud tokens.
    3. Governance early: Even then, I logged prompts/outputs for audit trails—foreshadowing SentinelForge.

    ChatGPT Enterprise wasn’t a replacement; it amplified my expertise, prepping 2024’s Control Tower orchestration.

    Ready for AI-secured IT? Contact me for M365 audits or homelab setups.

    Next: GitHub AI Boom and My Homelab Shift (Nov 2023).