Blog

  • GitHub AI Boom: 65k+ Projects Spark My Homelab

    GitHub AI Boom: My Homelab Goes Local
    Published: November 20, 2023 (retrospective)

    November 2023’s generative AI explosion on GitHub—repos tripled to 65k+—pushed me from cloud experimentation to Proxmox homelab builds. ChatGPT Enterprise handled scripting, but token costs and privacy concerns demanded local LLMs. Enter Ollama’s early hype.

    From Cloud to Containers

    GitHub’s AI project surge validated my pivot: Ollama promised free, private inference on Proxmox VMs. First homelab setup:
    – Llama2 on RTX 3090 GPU passthrough
    – QNAP NFS for model storage
    – Uptime Kuma monitoring from day one
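
    Lesson one below is that passthrough is finicky, so for reference, here is a rough sketch of the host-side pieces on a Proxmox node with an Intel CPU. The PCI address (01:00) and VM ID (110) are placeholders; check your own with lspci.

    ```
    # /etc/default/grub : enable IOMMU on the Proxmox host (Intel CPU assumed)
    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

    # /etc/modules : load the VFIO modules at boot
    vfio
    vfio_iommu_type1
    vfio_pci

    # Hand the GPU (placeholder PCI address 01:00) to placeholder VM 110
    qm set 110 -hostpci0 01:00,pcie=1
    ```

    Run update-grub and reboot the host before starting the VM.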

    Test                     Latency   Cost              Verdict
    GPT-4 (cloud)            2.1s      $0.03/1k tokens   Reference
    Ollama Llama2            1.8s      $0                Winner
    Local Router (planned)   1.2s      $0                Future

    In the first month, routing simple queries locally cut my API spend by 87%; the cloud stayed reserved for complex reasoning tasks only.

    Early Router Sketches

    The LocalLLM-Router concept was born here: route 70% of queries to Ollama, reserve cloud API for edge cases. This became the foundation of Control Tower’s cost engine a year later.
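
    A minimal sketch of that routing idea in Python. The heuristic (word count plus a few "needs reasoning" keywords) and the thresholds are my illustrative assumptions, not the actual Control Tower logic; in practice the "local" branch would POST to Ollama's API and the "cloud" branch to the hosted model.

    ```python
    # Hypothetical routing heuristic: short, routine queries stay local;
    # long or reasoning-heavy ones go to the cloud API.
    COMPLEX_HINTS = ("explain why", "compare", "architecture", "threat model")

    def route(query: str) -> str:
        """Return 'local' or 'cloud' for a query (illustrative thresholds)."""
        long_query = len(query.split()) > 60
        needs_reasoning = any(hint in query.lower() for hint in COMPLEX_HINTS)
        return "cloud" if (long_query or needs_reasoning) else "local"
    ```

    Tune the hint list against your own query logs; the 70/30 split falls out of the heuristic rather than being enforced directly.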

    Lessons

    1. Proxmox GPU passthrough is powerful but finicky—document everything.
    2. Model size matters: 7B handles most IT tasks; 13B for nuanced cybersecurity analysis.
    3. Privacy wins: Sensitive client data never leaves the homelab.

    Want to build your own AI homelab? Let’s talk.

    Next: M365 Copilot GA forces enterprise governance thinking (Jan 2024).

  • ChatGPT Enterprise: My First Steps into AI-Assisted IT

    ChatGPT Enterprise: My First Steps into AI-Assisted IT
    Published: September 25, 2023 (retrospective)

    2023 marked my pivot from 25+ years of pure IT/cybersecurity scripting to blending AI into daily workflows—starting with OpenAI’s ChatGPT Enterprise launch in late August. As a fractional IT Director managing M365 environments and Proxmox homelabs, I was sceptical: could AI handle PowerShell automation without hallucinating disasters? This post recaps those early experiments, wins, and the spark that ignited my AI journey.

    The Catalyst: Enterprise AI Goes Live

    ChatGPT Enterprise dropped on August 28, 2023, promising admin controls, data privacy, and unlimited GPT-4 access—perfect for SME cybersecurity without the free-tier limits. I spun it up immediately for real client work: generating Intune policies, parsing M365 audit logs, and drafting Bash scripts for QNAP backups. No more hours tweaking regex—AI nailed 80% on first try.

    Early tests:
    – Converted manual PowerShell M365 mailbox audits to reusable functions
    – Automated DD-WRT router configs for client VPNs
    – Brainstormed cPanel/WHM hardening checklists
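
    The audit-log parsing was the clearest win. As a sketch of the shape of that work (the field names UserId and Operation follow the M365 unified audit log export; the sample records are made up):

    ```python
    import json
    from collections import Counter

    def summarize_audit(records_json: str) -> Counter:
        """Count (user, operation) pairs from an exported M365 audit JSON array."""
        records = json.loads(records_json)
        return Counter((r.get("UserId"), r.get("Operation")) for r in records)

    # Fabricated sample records for illustration only
    sample = json.dumps([
        {"UserId": "admin@contoso.com", "Operation": "MailboxLogin"},
        {"UserId": "admin@contoso.com", "Operation": "MailboxLogin"},
        {"UserId": "user@contoso.com",  "Operation": "FileDownloaded"},
    ])
    ```

    The original scripts were PowerShell; the point is the pattern of turning a one-off manual review into a reusable function.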

    Key Wins and Pitfalls

    Q3 Milestones:
    September: First AI-generated Intune deployment script—deployed live, zero errors. Saved 4 hours per client.
    October: Ollama early access teased local runs, but cloud GPT-4 crushed complex queries.
    November: GitHub’s generative AI repos tripled to 65k+, inspiring my first LocalLLM-Router sketches.

    Experiment        Time Saved       Issues Found
    M365 Audits       4h/client        Overly verbose outputs
    Intune Policies   2 days/project   Needed fact-checking
    Backup Scripts    3h/setup         Hallucinated syntax (fixed iteratively)

    Pitfalls taught resilience: AI excelled at boilerplate but flopped on edge cases—my cybersecurity instincts always double-checked outputs.

    Lessons from the Frontlines

    1. Start small: Use AI for scripting grunt work, not strategy.
    2. Local potential: Ollama’s October buzz hinted at cost escapes from cloud tokens.
    3. Governance early: Even then, I logged prompts/outputs for audit trails—foreshadowing SentinelForge.
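
    The prompt/output logging from lesson 3 can be as simple as an append-only JSONL file. A minimal sketch (the path and schema are my assumptions, not SentinelForge's actual format):

    ```python
    import json
    import time

    def log_interaction(path: str, prompt: str, response: str, model: str) -> dict:
        """Append one prompt/response pair as a JSON line for later audit."""
        entry = {
            "ts": time.time(),      # when the interaction happened
            "model": model,         # which model produced the response
            "prompt": prompt,
            "response": response,
        }
        with open(path, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")
        return entry
    ```

    JSONL keeps the trail greppable and easy to ship into whatever log pipeline you already run.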

    ChatGPT Enterprise wasn’t a replacement; it amplified my expertise, prepping 2024’s Control Tower orchestration.

    Ready for AI-secured IT? Contact me for M365 audits or homelab setups.

    Next: GitHub AI Boom and My Homelab Shift (Nov 2023).