Tag: GitHub

  • HeliOS-Studio: AI Startup Studio Ignites

    HeliOS-Studio: When the Infrastructure Becomes the Product
    Published: February 15, 2026 (retrospective)

    Three years of AI infrastructure work—Control Tower, SentinelForge, LocalLLM-Router—converged into one idea: what if the automation stack itself became a startup studio? HeliOS-Studio is the answer. GitHub-orchestrated, Ollama-powered, CrewAI-driven. It ships products, not just prototypes.

    The Studio Model

    HeliOS-Studio
    ├── control-tower     (workflow orchestration)
    ├── sentinelforge     (secure agent execution)
    ├── llm-router        (zero-cost inference)
    └── blog-agent        (content automation)
    

    First product out of the studio: quickstart-smb-ai — an AI readiness toolkit for UK SMEs. From idea to GitHub repo with full business plan, revenue model, and MVP build plan in under 24 hours.

    What the Studio Produces

    Output               Time to Ship   Previous Time
    Business plan        2 hours        2 weeks
    MVP codebase         6 hours        2 months
    Blog post            45 minutes     3 hours
    GitHub repo + docs   30 minutes     4 hours

    The Philosophy

    1. Infrastructure first: build the factory before the product.
    2. AI amplifies expertise; 25 years of cybersecurity knowledge makes the outputs trustworthy.
    3. Open source where possible—the community improves what you start.

    Interested in AI-accelerated product development? Let’s build something.

    Next: 3-year journey retrospective (Mar 2026).

  • Control Tower Blueprint: Orchestrating Multi-AI Chaos

    Control Tower Blueprint: From AI Chaos to Factory
    Published: July 20, 2024 (retrospective)

    By mid-2024 I was juggling Claude, Copilot, Perplexity, and local Ollama instances simultaneously. Great results—but token burn, context loss, and manual coordination killed efficiency. Control Tower v1.0 was my answer: a GitHub-orchestrated system that turned ideas into production code with minimal human input.

    The Workflow

    Idea → GitHub Issue → Claude researches
         → Copilot codes → I approve PR → deployed
    

    Key design decisions:
    – Priority + budget fields on every issue halt overspend automatically
    – Human gate on every PR—AI proposes, I approve
    – Nightly decision cycle—agents run overnight, I review at breakfast

    The Numbers

    Metric                   Before Control Tower   After   Gain
    Code hours/week          20h                    4h      80% reduction
    Token cost/week          £120                   £12     90% reduction
    Projects shipped/month   1                      4       4x
    GitHub commits/month     45                     200+    4.4x

    What Made It Work

    1. Local-first routing: Proxmox + Ollama handled 70% of queries free.
    2. Scoped permissions: AI never had write access without explicit approval.
    3. Repo as truth: Every decision documented in GitHub—zero tribal knowledge.
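    A minimal version of the local-first routing rule might look like this; the length threshold and keyword hints are illustrative, not the real Control Tower heuristics:

```python
# Sketch of local-first routing: send short, routine prompts to a local
# Ollama model and reserve the paid cloud API for complex reasoning.
# The keyword list and length threshold below are made-up examples.

COMPLEX_HINTS = ("architecture", "threat model", "trade-off", "prove")

def choose_backend(prompt: str) -> str:
    """Return 'ollama' for routine queries, 'cloud' for complex ones."""
    looks_complex = len(prompt) > 2000 or any(
        hint in prompt.lower() for hint in COMPLEX_HINTS
    )
    return "cloud" if looks_complex else "ollama"

print(choose_backend("Write a bash one-liner to tail nginx logs"))
print(choose_backend("Compare the threat model of these two designs"))
```

    Even a crude heuristic like this is enough to push the bulk of day-to-day queries onto the free local path.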

    Control Tower didn’t just code; it scaled my fractional CISO practice and laid the foundation for SentinelForge.

    Want to automate your AI workflows? Book a Secure AI QuickScan.

    Next: CrewAI launch transforms agent security (Oct 2024).

  • GitHub AI Boom: 65k+ Projects Spark My Homelab

    GitHub AI Boom: My Homelab Goes Local
    Published: November 20, 2023 (retrospective)

    November 2023’s generative AI explosion on GitHub—repos tripled to 65k+—pushed me from cloud experimentation to Proxmox homelab builds. ChatGPT Enterprise handled scripting, but token costs and privacy concerns demanded local LLMs. Enter Ollama, then just beginning to generate buzz.

    From Cloud to Containers

    GitHub’s AI project surge validated my pivot: Ollama promised free, private inference on Proxmox VMs. First homelab setup:
    – Llama2 on RTX 3090 GPU passthrough
    – QNAP NFS for model storage
    – Uptime Kuma monitoring from day one
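    A first smoke test of a setup like this can go through Ollama’s HTTP API on its default port 11434. This sketch assumes `ollama pull llama2` has already run and uses only the standard library:

```python
# Sketch: query a local Ollama instance via its HTTP API.
# Assumes Ollama is serving on the default localhost:11434 and that
# the llama2 model has been pulled.
import json
import urllib.request

def build_request(prompt: str, model: str = "llama2") -> urllib.request.Request:
    """Build a non-streaming request for Ollama's /api/generate endpoint."""
    payload = json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode()
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

def generate(prompt: str) -> str:
    """Send the prompt and return the model's response text."""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        return json.loads(resp.read())["response"]

# With Ollama running:
# print(generate("Summarise Proxmox GPU passthrough in one sentence"))
```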

    Test                     Latency   Cost              Verdict
    GPT-4 (cloud)            2.1s      $0.03/1k tokens   Reference
    Ollama Llama2            1.8s      $0                Winner
    Local Router (planned)   1.2s      $0                Future

    First month: 87% cost savings by routing simple queries locally, with cloud reserved for complex reasoning tasks only.

    Early Router Sketches

    The LocalLLM-Router concept was born here: route 70% of queries to Ollama, reserve cloud API for edge cases. This became the foundation of Control Tower’s cost engine a year later.

    Lessons

    1. Proxmox GPU passthrough is powerful but finicky—document everything.
    2. Model size matters: 7B handles most IT tasks; 13B for nuanced cybersecurity analysis.
    3. Privacy wins: Sensitive client data never leaves the homelab.

    Want to build your own AI homelab? Let’s talk.

    Next: M365 Copilot GA forces enterprise governance thinking (Jan 2024).