[{"content":"","date":"5 April 2026","externalUrl":null,"permalink":"/tags/agentic-ai/","section":"Tags","summary":"","title":"Agentic-Ai","type":"tags"},{"content":"","date":"5 April 2026","externalUrl":null,"permalink":"/tags/ai/","section":"Tags","summary":"","title":"Ai","type":"tags"},{"content":"I recently listened to an episode of The AI Daily Brief called \u0026ldquo;How to Build a Personal Context Portfolio\u0026rdquo; and it immediately clicked. We spend so much time giving our AI tools piecemeal instructions about our preferences, our roles, and our projects. It makes total sense to consolidate all of that into a single, structured repository.\nSo I built one.\nIf you want to jump straight to the code, you can find my public Personal Context Portfolio on GitHub at khodo-lab/keithhodo-personal-context-portfolio.\nKeith, Lara, Theo, and Dana enjoy the Seattle Mariners at T-Mobile Field. The Context Repetition Tax # We officially live in the agentic era. When you work with AI agents or tools like Kiro, context is everything. An AI can write a script or draft a document, but if it doesn\u0026rsquo;t know your specific constraints, your coding style, or your professional background, the output is generic. You end up spending extra cycles correcting the tone or tweaking the formatting to match what you actually wanted.\nBefore I built this portfolio, I was paying what the podcast called the \u0026ldquo;context repetition tax.\u0026rdquo; Every time I set up a new agent, switched devices, or started a fresh session, I had to re-explain myself from the ground up. I had to remind the AI about my role, my current projects, and my communication preferences.\nWhen you only use one or two tools, this is just a minor annoyance. But as you start orchestrating multiple agents across different workflows, it becomes completely untenable. The context repetition tax does not just waste time. It degrades the quality of the output. 
When you are tired of typing out the same instructions over and over, you start leaving things out, and the AI starts making assumptions.\nWhy a Context Portfolio? # By maintaining a Personal Context Portfolio, you create a baseline truth for your AI tools to reference. It acts as an operating manual for any AI that works with you. Instead of dropping a raw model into your workspace and hoping it figures things out, you provide a single source of machine-readable truth.\nWhenever I start a new session, the AI already knows I am a Partner Solutions Architect at AWS. It knows I prefer CDK over Terraform, and it knows my stance on writing clean, maintainable infrastructure as code.\nMarkdown First and Modular # The most important design principle for this portfolio is that it is built entirely in Markdown. Every AI system on earth can read Markdown. It is the universal interchange format for context.\nI also kept the portfolio modular rather than monolithic. I do not have one giant file that tries to explain my entire life. Instead, I broke it down into separate files for different dimensions of my work. This allows agents to grab exactly what is relevant for the task at hand and ignore the rest.\nFor example, here is a snippet from my identity.md file, which sets the baseline for who I am and what I do:\n# Identity: Keith Hodo **Location:** Kirkland \u0026amp; Seattle, WA **Professional Focus:** Partner Solutions Architect at Amazon Web Services (AWS) **Philosophy:** \u0026#34;Teaching AI to think before it builds\u0026#34; **Community Handle:** Keith Hodo And here is a look at my communication_style.md, which helps guide the tone and rigor of the AI output:\n# Communication Style ## 🎨 Tone \u0026amp; Philosophy - **\u0026#34;Keith Hodo\u0026#34; Style:** Direct, empathetic, and highly structured. - **Engineering Rigor:** I prefer agents and collaborators who challenge assumptions and designs rather than offering simple \u0026#34;yes\u0026#34; responses. 
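To make the modular idea concrete, here is a minimal sketch (plain Python, not tied to Kiro or any particular tool) of how a script might assemble only the relevant context files into a single prompt block. The identity.md and communication_style.md names come from my repo; the task-to-file mapping and the assemble_context helper are purely illustrative:

```python
from pathlib import Path

# Map task types to the context files that matter for them, so an
# agent loads only what is relevant (hypothetical mapping for illustration).
CONTEXT_FOR_TASK = {
    "write-code": ["identity.md", "communication_style.md"],
    "draft-post": ["identity.md", "communication_style.md"],
}

def assemble_context(portfolio_dir: str, task: str) -> str:
    """Concatenate the context files relevant to a task into one prompt block."""
    root = Path(portfolio_dir)
    sections = []
    for name in CONTEXT_FOR_TASK.get(task, []):
        path = root / name
        if path.exists():  # skip any file not present in the portfolio
            sections.append(f"<!-- {name} -->\n{path.read_text()}")
    return "\n\n".join(sections)
```

From there, the assembled string can be dropped into whatever custom-instructions or system-prompt field your tool of choice exposes.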
I have other files for my current projects, team relationships, tools and systems, and hard constraints. Because the setup is version controlled, it is a living document. As my projects change or my priorities shift, the portfolio evolves with me.\nPutting It to Work # The real value unlocks when you integrate this portfolio into your daily workflows. If you use tools that support custom instructions or system prompts, you can simply point them to your context repository.\nFor my local setup, I integrate these principles directly into my Kiro skills and steering documents. Before the AI writes a single line of code or drafts a new blog post, it reviews my established context. The results are outputs that require far less editing and feel significantly more aligned with my actual voice and intent.\nI highly recommend checking out The AI Daily Brief episode that sparked this idea. It is a fantastic listen and a great mental model for getting more leverage out of your AI tools. And if you want a template to start your own, feel free to fork my repository.\nKeith\n","date":"5 April 2026","externalUrl":null,"permalink":"/posts/building-my-personal-context-portfolio/","section":"Posts","summary":"How I built a Personal Context Portfolio to give my AI tools better context about who I am and how I work.","title":"Building My Personal Context Portfolio","type":"posts"},{"content":"","date":"5 April 2026","externalUrl":null,"permalink":"/tags/context/","section":"Tags","summary":"","title":"Context","type":"tags"},{"content":"","date":"5 April 2026","externalUrl":null,"permalink":"/","section":"Keith Hodo","summary":"","title":"Keith Hodo","type":"page"},{"content":"","date":"5 April 2026","externalUrl":null,"permalink":"/tags/personal-knowledge/","section":"Tags","summary":"","title":"Personal-Knowledge","type":"tags"},{"content":"","date":"5 April 
2026","externalUrl":null,"permalink":"/posts/","section":"Posts","summary":"","title":"Posts","type":"posts"},{"content":"","date":"5 April 2026","externalUrl":null,"permalink":"/tags/productivity/","section":"Tags","summary":"","title":"Productivity","type":"tags"},{"content":"","date":"5 April 2026","externalUrl":null,"permalink":"/tags/","section":"Tags","summary":"","title":"Tags","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/automation/","section":"Tags","summary":"","title":"Automation","type":"tags"},{"content":"In my first post I covered Kiro Skills, the reusable workflows that handle spec writing, implementation, and deployment. In the second post I showed how Agents add personas to those workflows, turning a single-pass code review into a five-agent parallel review with an orchestrator.\nThis post is about what happens before any of that.\nI was walking my dog and had this thought: why can\u0026rsquo;t I just have agents and skills that represent deeply skilled personas? Not just code reviewers, but the kind of people who challenge your thinking before you start building. A principal engineer who pokes holes in your architecture. A principal PM who asks whether you\u0026rsquo;re solving the right problem. Research scouts who go find the gotchas you haven\u0026rsquo;t thought of yet.\nSo I built them. And they\u0026rsquo;ve changed how I approach every new feature on the Cascadian Gamers Extra Life Admin project.\nKeith with Deloitte\u0026rsquo;s Evan Erwee prior to RSA 2026 in San Francisco The Inspiration # The deep research skill was directly inspired by Mitchell Hashimoto\u0026rsquo;s interview on the Pragmatic Engineer podcast. His workflow stuck with me: if I\u0026rsquo;m coding, I want an agent planning. The idea is that while you\u0026rsquo;re heads-down on implementation, you can have AI doing the upfront research, comparing approaches, surfacing constraints. 
You come back to a structured brief instead of starting from scratch.\nI ran my capture-skill skill against that interview and pulled out the core pattern. Then I built it.\nBy the way, if you haven\u0026rsquo;t checked out Mitchell\u0026rsquo;s new project Ghostty, do yourself a favor. It\u0026rsquo;s my daily driver terminal now.\nThe Background Research Skill # This is the skill I kick off before building any new feature. Not for bugs, not for small fixes, but for anything that\u0026rsquo;s a big hairy problem. It dispatches three specialist research agents in parallel, waits for them to finish, and synthesizes everything into a structured brief that feeds directly into my create-spec skill.\nHere\u0026rsquo;s the full skill:\n--- name: background-research description: \u0026gt;- Kick off parallel background research before building. Dispatches specialist agents to compare alternatives, surface edge cases, and check AWS constraints — so you have a structured brief ready before writing a spec or starting implementation. metadata: author: cascadian-gamers version: \u0026#34;1.0\u0026#34; --- # Background Research Kick off parallel background research before building. Inspired by Mitchell Hashimoto\u0026#39;s workflow: \u0026#34;If I\u0026#39;m coding, I want an agent planning.\u0026#34; Dispatch research agents before you leave, come back to a structured brief. ## When to Run - Before writing a spec for a new feature (\u0026#34;what are my options for X?\u0026#34;) - Before adopting a new library or AWS service - When you want edge cases and gotchas surfaced before implementation starts - Route triggers: \u0026#34;research before I build\u0026#34;, \u0026#34;compare approaches\u0026#34;, \u0026#34;what are my options\u0026#34;, \u0026#34;what could go wrong with\u0026#34; ## Input A natural language description of what you\u0026#39;re about to build or decide. 
Examples: - \u0026#34;I want to add streaming to the AI chat response\u0026#34; - \u0026#34;Should I use SQS or EventBridge for the donation event pipeline?\u0026#34; - \u0026#34;What are the gotchas with AgentCore Memory pagination?\u0026#34; ## Process ### Phase 1: Dispatch (parallel) Run all 3 research agents simultaneously via `use_subagent` (max 4 concurrent — all 3 fit in one batch): 1. **`research-alternatives`** — What are the viable approaches? Compare 2-4 options with tradeoffs. 2. **`research-edge-cases`** — What could go wrong? Failure modes, known bugs, operational gotchas. 3. **`research-aws-constraints`** — AWS-specific: API limits, IAM requirements, regional availability, pricing surprises. Each agent receives: - The full research question - Relevant tech stack context (from `.kiro/steering/tech.md`) - Any specific constraints mentioned by the user ### Phase 2: Synthesize Combine the 3 agent outputs into a **Research Brief** with these sections: ## Research Brief: {topic} ### Recommended Approach One paragraph. The best option given the project\u0026#39;s stack and constraints. ### Alternatives Considered | Option | Pros | Cons | Verdict | |--------|------|------|---------| ### Edge Cases \u0026amp; Gotchas - Bullet list of failure modes, known issues, operational surprises ### AWS Constraints - API limits, IAM requirements, regional availability, pricing notes ### Open Questions - Anything that needs a decision before proceeding ### Ready to Feed Into - [ ] `create-spec` — use this brief as requirements input - [ ] `implement-and-review-loop` — reference during implementation ### Phase 3: Offer Next Step After presenting the brief, ask: \u0026gt; \u0026#34;Ready to turn this into a spec? I can run \u0026gt; `create-spec` with this brief as input.\u0026#34; ## Rules - Run all 3 agents in parallel — don\u0026#39;t serialize them. 
- **Subagent fallback**: If agents refuse or fail, do the research inline using search, web search, and direct AWS CLI calls. Never skip research — inline is better than nothing. - Keep the brief scannable — bullets and tables, not paragraphs. - If the question is AWS-specific, weight the `research-aws-constraints` output more heavily. - If the question is purely architectural (no AWS), the `research-aws-constraints` agent can focus on general infrastructure constraints instead. - Don\u0026#39;t make a final recommendation without surfacing the tradeoffs — the user makes the call. The structure matters. Phase 1 dispatches all three agents at once. Phase 2 synthesizes their outputs into a single brief. Phase 3 offers to chain into the next skill. The whole thing is designed to flow: research feeds spec, spec feeds implementation, implementation feeds review.\nThe Three Research Agents # Each research agent is a specialist. They get the same question but look at it through a different lens.\nResearch Alternatives compares 2-4 viable approaches and recommends the best fit:\n{ \u0026#34;name\u0026#34;: \u0026#34;research-alternatives\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;Compares implementation approaches and recommends the best fit for the project stack\u0026#34;, \u0026#34;prompt\u0026#34;: \u0026#34;file://./prompts/research-alternatives.md\u0026#34;, \u0026#34;tools\u0026#34;: [ \u0026#34;fs_read\u0026#34;, \u0026#34;grep\u0026#34;, \u0026#34;glob\u0026#34;, \u0026#34;web_search\u0026#34; ], \u0026#34;allowedTools\u0026#34;: [ \u0026#34;fs_read\u0026#34;, \u0026#34;grep\u0026#34;, \u0026#34;glob\u0026#34;, \u0026#34;web_search\u0026#34; ], \u0026#34;resources\u0026#34;: [ \u0026#34;file://.kiro/steering/tech.md\u0026#34;, \u0026#34;file://.kiro/steering/structure.md\u0026#34; ], \u0026#34;welcomeMessage\u0026#34;: \u0026#34;🔭 Alternatives research agent ready. 
What are we comparing?\u0026#34; } With the prompt:\nYou are a research specialist focused on comparing approaches and alternatives. Given a problem or feature description, identify 2-4 viable implementation options and compare them with clear tradeoffs. Focus on: - What are the realistic options given the project\u0026#39;s tech stack? - What are the concrete pros and cons of each? - Which option best fits a small team running a charity gaming app on AWS? - Are there any libraries, patterns, or AWS services that are clearly better fits? Be specific and opinionated. Don\u0026#39;t list every possible option — focus on the 2-4 most viable ones. End with a clear recommendation and the reasoning behind it. Format your output as: ## Alternatives Analysis ### Option 1: {name} **Pros:** ... **Cons:** ... ### Option 2: {name} ... ### Recommendation {one paragraph with clear reasoning} Research Edge Cases surfaces failure modes and production risks:\n{ \u0026#34;name\u0026#34;: \u0026#34;research-edge-cases\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;Surfaces failure modes, operational gotchas, and production risks before implementation\u0026#34;, \u0026#34;prompt\u0026#34;: \u0026#34;file://./prompts/research-edge-cases.md\u0026#34;, \u0026#34;tools\u0026#34;: [ \u0026#34;fs_read\u0026#34;, \u0026#34;grep\u0026#34;, \u0026#34;glob\u0026#34;, \u0026#34;web_search\u0026#34; ], \u0026#34;allowedTools\u0026#34;: [ \u0026#34;fs_read\u0026#34;, \u0026#34;grep\u0026#34;, \u0026#34;glob\u0026#34;, \u0026#34;web_search\u0026#34; ], \u0026#34;resources\u0026#34;: [ \u0026#34;file://.kiro/steering/tech.md\u0026#34;, \u0026#34;file://.kiro/steering/memory.md\u0026#34; ], \u0026#34;welcomeMessage\u0026#34;: \u0026#34;⚠️ Edge case research agent ready. What are we stress-testing?\u0026#34; } With the prompt:\nYou are a research specialist focused on surfacing failure modes, edge cases, and operational gotchas. 
Given a problem or feature description, identify what could go wrong before, during, and after implementation. Focus on: - Known bugs or limitations in the libraries/services involved - Failure modes under load, at scale, or in edge conditions - Operational surprises (cold starts, timeouts, rate limits, eventual consistency) - Common mistakes teams make with this approach - Things that work in dev but break in production - Security or data integrity risks Be specific. Don\u0026#39;t list generic software engineering advice — focus on gotchas specific to the technology or approach in question. Format your output as: ## Edge Cases \u0026amp; Gotchas ### {Category} - {specific gotcha with enough detail to act on} ### Known Limitations - ... ### Production Risks - ... Research AWS Constraints checks the AWS-specific angles that bite you in production:\n{ \u0026#34;name\u0026#34;: \u0026#34;research-aws-constraints\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;Identifies AWS IAM requirements, quotas, regional availability, and CDK constraints before building\u0026#34;, \u0026#34;prompt\u0026#34;: \u0026#34;file://./prompts/research-aws-constraints.md\u0026#34;, \u0026#34;tools\u0026#34;: [ \u0026#34;fs_read\u0026#34;, \u0026#34;grep\u0026#34;, \u0026#34;glob\u0026#34;, \u0026#34;web_search\u0026#34; ], \u0026#34;allowedTools\u0026#34;: [ \u0026#34;fs_read\u0026#34;, \u0026#34;grep\u0026#34;, \u0026#34;glob\u0026#34;, \u0026#34;web_search\u0026#34; ], \u0026#34;resources\u0026#34;: [ \u0026#34;file://.kiro/steering/tech.md\u0026#34;, \u0026#34;file://.kiro/steering/memory.md\u0026#34; ], \u0026#34;welcomeMessage\u0026#34;: \u0026#34;☁️ AWS constraints research agent ready. What are we checking?\u0026#34; } With the prompt:\nYou are a research specialist focused on AWS-specific constraints, limits, and requirements. Given a problem or feature description, identify the AWS-specific considerations before implementation. 
Focus on: - API rate limits and quotas that could affect the design - IAM permissions required (be specific — list the exact actions needed) - Regional availability (is the service/feature available in your region?) - Pricing surprises or cost implications at the project\u0026#39;s scale - CloudFormation/CDK resource limits or deployment constraints - Service-specific gotchas (eventual consistency, eventual propagation, cold starts) - Cross-service dependencies (e.g., \u0026#34;Athena requires Glue catalog permissions\u0026#34;) Format your output as: ## AWS Constraints ### IAM Requirements - Exact actions needed: ... - Resource scoping: ... ### Quotas \u0026amp; Limits - ... ### Regional Availability - Available in your region: yes/no/partial ### Pricing Notes - ... ### CDK/CloudFormation Notes - ... ### Cross-Service Dependencies - ... Notice the pattern. Each agent loads project context via resources so it already knows the tech stack before it starts researching. Each one has web_search in its toolset so it can look up current documentation, not just rely on training data. And each one produces structured output that the background-research skill can synthesize into a clean brief.\nThe Principal Agents # The research agents find information. The principal agents challenge decisions.\nI wanted to simulate the tension that comes with expertise. When you\u0026rsquo;re on a team with a strong principal engineer, they don\u0026rsquo;t just review your code. They question your approach before you start writing it. Same with a strong PM. 
They ask whether you\u0026rsquo;re solving the right problem before you scope the solution.\nThese agents exist to help me see around corners.\nPrincipal Software Engineer is the architecture guardian:\n{ \u0026#34;name\u0026#34;: \u0026#34;principal-pse\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;Principal Software Engineer — challenges architecture decisions, coupling risks, and complexity before implementation starts\u0026#34;, \u0026#34;prompt\u0026#34;: \u0026#34;file://./prompts/principal-pse.md\u0026#34;, \u0026#34;tools\u0026#34;: [ \u0026#34;fs_read\u0026#34;, \u0026#34;grep\u0026#34;, \u0026#34;glob\u0026#34;, \u0026#34;web_search\u0026#34; ], \u0026#34;allowedTools\u0026#34;: [ \u0026#34;fs_read\u0026#34;, \u0026#34;grep\u0026#34;, \u0026#34;glob\u0026#34;, \u0026#34;web_search\u0026#34; ], \u0026#34;resources\u0026#34;: [ \u0026#34;file://.kiro/steering/tech.md\u0026#34;, \u0026#34;file://.kiro/steering/structure.md\u0026#34;, \u0026#34;file://.kiro/steering/memory.md\u0026#34; ], \u0026#34;welcomeMessage\u0026#34;: \u0026#34;⚙️ Principal Engineer ready. Show me the design and I\u0026#39;ll find what breaks.\u0026#34; } With the prompt:\nYou are a Principal Software Engineer reviewing a feature spec. You are an architecture guardian, not an approver. Your job is to challenge design decisions, surface coupling risks, and ensure the team is building the simplest thing that works — not the most impressive thing. ## Your Lens - **Simplicity**: Is this the simplest architecture that solves the problem? What complexity are we adding that we don\u0026#39;t need? - **Coupling**: What are we coupling that will hurt us later? What decisions are we making that are hard to reverse? - **Consistency**: Does this follow existing patterns in the codebase? If it diverges, is there a good reason? - **Operational reality**: How does this behave under failure? What\u0026#39;s the blast radius? - **Long-term**: What does the 2-year version of this look like? 
Are we building toward it or away from it? - **Tech debt**: Are we taking on debt knowingly? Is it documented? ## Your Output Format Always return exactly this structure: ### Principal Engineer Review ### ✅ Strengths - What the design gets right ### ⚠️ Concerns - Things that need addressing but aren\u0026#39;t blockers ### ❌ Blockers - Must resolve before proceeding (if none, say \u0026#34;None\u0026#34;) ### 🔀 Alternatives Worth Considering - Simpler approaches the spec didn\u0026#39;t consider ### ❓ Open Questions - Architectural decisions that need a call before HLD is locked ## Rules - Be direct. Skip diplomatic softening. - Every concern must reference a specific part of the spec — no generic feedback. - If a design decision creates irreversible coupling, flag it as a blocker. - If the spec proposes a new pattern when an existing one would work, challenge it. - Binary findings: each concern is either a blocker or it isn\u0026#39;t. Principal Product Manager is the strategic challenger:\n{ \u0026#34;name\u0026#34;: \u0026#34;principal-pm\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;Principal Product Manager — challenges problem statements, scope, and user value before implementation starts\u0026#34;, \u0026#34;prompt\u0026#34;: \u0026#34;file://./prompts/principal-pm.md\u0026#34;, \u0026#34;tools\u0026#34;: [ \u0026#34;fs_read\u0026#34;, \u0026#34;grep\u0026#34;, \u0026#34;glob\u0026#34; ], \u0026#34;allowedTools\u0026#34;: [ \u0026#34;fs_read\u0026#34;, \u0026#34;grep\u0026#34;, \u0026#34;glob\u0026#34; ], \u0026#34;resources\u0026#34;: [ \u0026#34;file://.kiro/steering/product.md\u0026#34;, \u0026#34;file://.kiro/steering/memory.md\u0026#34; ], \u0026#34;welcomeMessage\u0026#34;: \u0026#34;📋 Principal PM ready. Show me what we\u0026#39;re building and I\u0026#39;ll tell you if we should.\u0026#34; } With the prompt:\nYou are a Principal Product Manager reviewing a feature spec. You are a strategic challenger, not a rubber stamp. 
Your job is to push back, surface assumptions, and ensure the team is solving the right problem before a single line of code is written. ## Your Lens - **User value first**: Does this feature solve a real user pain point, or is it engineering-driven complexity? - **Simplest viable product**: What\u0026#39;s the smallest version that delivers the core value? Are we over-building? - **Priority challenge**: Is this the right thing to build *right now* given the backlog? - **Success definition**: How will we know this worked? What does \u0026#34;done\u0026#34; look like from a user perspective? - **Risk**: What happens if we build this and users don\u0026#39;t care? ## Your Output Format Always return exactly this structure: ### Principal PM Review ### ✅ Strengths - What the spec gets right from a product perspective ### ⚠️ Concerns - Things that need addressing but aren\u0026#39;t blockers ### ❌ Blockers - Must resolve before proceeding (if none, say \u0026#34;None\u0026#34;) ### ❓ Open Questions - Questions that need a decision before requirements are locked ## Rules - Be direct. Skip diplomatic softening. - Every concern must be actionable — \u0026#34;this is vague\u0026#34; is not a concern, \u0026#34;FR-3 has no acceptance criterion\u0026#34; is. - If the problem statement doesn\u0026#39;t clearly articulate user pain, flag it as a blocker. - If the scope is larger than necessary for the stated problem, say so explicitly. - Binary findings: each concern is either a blocker or it isn\u0026#39;t. No \u0026#34;maybe\u0026#34; category. A few things to notice about the principal agents.\nThe PSE loads tech.md, structure.md, and memory.md as resources. It knows the tech stack, the project layout, and the accumulated learnings from past sessions. When it says \u0026ldquo;this diverges from existing patterns,\u0026rdquo; it actually knows what the existing patterns are.\nThe PM loads product.md and memory.md. 
It knows what the product is, who the users are, and what decisions have already been made. When it asks \u0026ldquo;is this the right thing to build right now?\u0026rdquo; it has context about the backlog.\nBoth agents have the same rule: \u0026ldquo;Be direct. Skip diplomatic softening.\u0026rdquo; I wanted to simulate the tension that comes with expertise, not the polite nodding that comes with most AI tools. When the principal engineer says something is a blocker, it means stop and fix it before proceeding.\nThe PM doesn\u0026rsquo;t get web_search. That\u0026rsquo;s intentional. The PM\u0026rsquo;s job is to challenge the problem statement and scope using what we already know about our users and product. The PSE gets web_search because architecture decisions sometimes need current documentation or library comparisons.\nHow It All Fits Together # Here\u0026rsquo;s the workflow in practice:\nI have an idea for a new feature. I kick off background-research with a description of what I want to build. Three research agents run in parallel: alternatives, edge cases, AWS constraints. I get back a structured brief with a recommended approach, tradeoffs, gotchas, and open questions. I feed that brief into create-spec, which produces requirements, high-level design, low-level design, and a task plan. The principal PSE reviews the design for architecture concerns. The principal PM reviews the requirements for scope and user value. I resolve any blockers they surface. implement-and-review-loop takes over for the actual coding. The five-agent code review (from the previous post) catches implementation issues. The research skill is the front door. 
Everything downstream is better because the upfront thinking already happened.\nA Real Example: Shipping the VAR Agent # This past weekend I used this exact workflow to ship a brand new feature for the Cascadian Gamers Extra Life Admin project: the VAR agent.\nThe VAR (Video Assistant Referee, borrowed from soccer) is an AI agent responsible for reviewing actual data coming from SQL Server. It leverages the GET* stored procedures directly to verify numbers. It\u0026rsquo;s meant to be supplementary to our existing Scout Agent. When a user types something like \u0026ldquo;please double check these numbers,\u0026rdquo; the VAR agent fires up, queries the real data, and validates what the Scout reported.\nI ran background-research first. The research agents surfaced the stored procedure patterns we already had, identified the edge cases around SQL Server connection pooling in the agent runtime, and flagged the IAM permissions needed for the agent to access the database through our existing infrastructure.\nThat brief fed into create-spec. The principal PSE flagged a coupling concern between the VAR and Scout agents that I hadn\u0026rsquo;t considered. The principal PM pushed back on scope, asking whether we needed full CRUD verification or just read validation for the first version. Both were right. I scoped it down.\nThe initial feature shipped with one very minor bug, which was fixed forward in our next deployment. From idea to production in a weekend, with principal-level review at every stage.\nWhat I\u0026rsquo;ve Learned # It\u0026rsquo;s still early days with these agents, but a few things stand out.\nThe research skill has become the thing I reach for before any significant feature work. It\u0026rsquo;s not useful for bug fixes or small changes. But for anything where I\u0026rsquo;d normally spend an hour reading docs and comparing approaches, it saves real time and surfaces things I would have missed.\nThe principal agents create productive friction. 
Most AI tools are agreeable by default. These agents are designed to push back. That tension is the point. When the PSE says \u0026ldquo;this creates irreversible coupling,\u0026rdquo; I pay attention because the prompt is calibrated to only flag things that matter.\nThe structured output formats are important. Every agent produces a consistent format: strengths, concerns, blockers, open questions. That consistency means I can scan the output quickly and know exactly where to focus. No wading through paragraphs of hedged opinions.\nAnd the parallel dispatch pattern from the background-research skill is something I keep reusing. Any time I need multiple perspectives on the same question, I dispatch specialist agents simultaneously and synthesize the results. It\u0026rsquo;s faster than serial research and the different lenses catch different things.\nGetting Started # If you\u0026rsquo;ve read the Skills post and the Agents post, adding research and principal agents is the natural next step. Start with one research agent. Pick the lens that matters most for your project (AWS constraints if you\u0026rsquo;re cloud-heavy, edge cases if you\u0026rsquo;re building something new, alternatives if you\u0026rsquo;re at a decision point) and build from there.\nThe principal agents are worth the investment if you\u0026rsquo;re working solo or on a small team. They simulate the review you\u0026rsquo;d get from a senior colleague. They won\u0026rsquo;t replace a real principal engineer, but they\u0026rsquo;ll catch the things you\u0026rsquo;re too close to the problem to see.\nAll of these are just JSON files and markdown prompts in your .kiro/ directory. They live in the repo, they\u0026rsquo;re version controlled, and they improve over time as you refine the prompts based on what works and what doesn\u0026rsquo;t.\nThe Kiro agent docs cover the setup. The Kiro skills docs cover the workflow side. 
Between the two, you have everything you need to build your own research and review pipeline.\nKeith\n","date":"23 March 2026","externalUrl":null,"permalink":"/posts/deep-research-and-principal-agents/","section":"Posts","summary":"I built a deep research skill and principal-level review agents that challenge my designs before I write code. Here’s how encoding senior engineering judgment into AI agents changed the way I ship features.","title":"Deep Research and Principal Agents: Teaching AI to Think Before It Builds","type":"posts"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/deep-research/","section":"Tags","summary":"","title":"Deep-Research","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/developer-tools/","section":"Tags","summary":"","title":"Developer-Tools","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/extra-life/","section":"Tags","summary":"","title":"Extra-Life","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/kiro/","section":"Tags","summary":"","title":"Kiro","type":"tags"},{"content":"","date":"23 March 2026","externalUrl":null,"permalink":"/tags/subagents/","section":"Tags","summary":"","title":"Subagents","type":"tags"},{"content":"","date":"8 March 2026","externalUrl":null,"permalink":"/tags/code-review/","section":"Tags","summary":"","title":"Code-Review","type":"tags"},{"content":"In my previous post I walked through the Kiro Skills I\u0026rsquo;ve built for spec writing, implementation, and session management. Skills are the workflows, the step-by-step instructions that tell Kiro how to approach a type of work. But Skills are only half the picture.\nThe other half is Agents.\nSkills vs Agents # A Skill is a workflow. It describes a process: gather requirements, write code, run a review, hand off context. A Kiro Agent is a persona. 
It defines who is doing the work: what tools they have access to, what context they start with, what permissions they\u0026rsquo;re granted, and how they think about problems.\nSkills tell Kiro what to do. Agents tell Kiro how to be.\nIn practice, the two work together. My implement-and-review-loop skill orchestrates the development cycle, but when it reaches the review phase, it hands off to specialized agents, each one focused on a different aspect of code quality. The Skill is the conductor. The Agents are the musicians.\nWhat Changed # When I first built my review system, the Agent definitions were simple JSON files with inline prompts. They worked, but the prompts were getting long and hard to maintain. I also wasn\u0026rsquo;t taking advantage of several agent capabilities that Kiro supports.\nI recently went through the Kiro agent configuration reference and updated all my Agents to use the current spec. The big changes:\nPrompts moved to separate files. Instead of cramming a multi-paragraph prompt into a JSON string with escaped newlines, each Agent now references an external markdown file via file://. The JSON stays clean and the prompts are easy to read and edit.\nResources for automatic context. Agents can now declare resources, files that get loaded into context when the Agent starts. My review Agents all load the project\u0026rsquo;s structure and tech stack docs so they understand the codebase before they read a single line of changed code.\nHooks for lifecycle automation. The hooks field lets you run commands at specific trigger points. My code review orchestrator runs git diff --name-only on spawn so it immediately knows what files changed.\nWelcome messages. Small thing, but when you\u0026rsquo;re swapping between agents during a session, seeing \u0026ldquo;🔒 Security review agent ready. What should I audit?\u0026rdquo; is a nice confirmation that you\u0026rsquo;re talking to the right persona.\nAllowed tools for security. 
The allowedTools field controls which tools an agent can use without prompting. My review agents get read-only access: fs_read, code, grep, glob. They can inspect the codebase but can\u0026rsquo;t modify it. The orchestrator gets use_subagent so it can spawn the specialists.\nThe Code Review Orchestrator # This is the agent I\u0026rsquo;m most excited about. Instead of running five separate reviews manually, I have a code-reviewer agent that orchestrates the entire process. It identifies changed files, spawns five specialist subagents in parallel, collects their findings, deduplicates, assigns final severities, and produces a consolidated report.\nHere\u0026rsquo;s the full agent definition:\n{ \u0026#34;name\u0026#34;: \u0026#34;code-reviewer\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;Code review orchestrator that delegates to specialized security, performance, maintainability, infrastructure, and test quality reviewers\u0026#34;, \u0026#34;prompt\u0026#34;: \u0026#34;file://./prompts/code-reviewer.md\u0026#34;, \u0026#34;tools\u0026#34;: [ \u0026#34;fs_read\u0026#34;, \u0026#34;fs_write\u0026#34;, \u0026#34;code\u0026#34;, \u0026#34;grep\u0026#34;, \u0026#34;glob\u0026#34;, \u0026#34;execute_bash\u0026#34;, \u0026#34;use_subagent\u0026#34; ], \u0026#34;allowedTools\u0026#34;: [ \u0026#34;fs_read\u0026#34;, \u0026#34;code\u0026#34;, \u0026#34;grep\u0026#34;, \u0026#34;glob\u0026#34;, \u0026#34;use_subagent\u0026#34; ], \u0026#34;resources\u0026#34;: [ \u0026#34;file://.kiro/steering/structure.md\u0026#34;, \u0026#34;file://.kiro/steering/tech.md\u0026#34; ], \u0026#34;hooks\u0026#34;: { \u0026#34;agentSpawn\u0026#34;: [ { \u0026#34;command\u0026#34;: \u0026#34;git diff --name-only\u0026#34; } ] }, \u0026#34;welcomeMessage\u0026#34;: \u0026#34;📋 Code review orchestrator ready. I\u0026#39;ll coordinate all 5 specialist reviewers.\u0026#34; } And the orchestrator prompt that lives in prompts/code-reviewer.md:\nYou are a code review orchestrator. 
Your workflow: 1. Identify the changed files using `git diff --name-only` and `git ls-files --others --exclude-standard`. 2. Spawn five specialized subagent reviews IN PARALLEL: - review-security - review-performance - review-maintainability - review-infrastructure - review-test-quality Pass each subagent the list of files to review and relevant project context. 3. Synthesize all five reviews into a single consolidated report. 4. Save the consolidated report to reviews/review-{DATE}-{DESCRIPTION}.md. 5. Present a summary to the user. When synthesizing: - Deduplicate findings that appear in multiple reviews - Assign a final severity to each unique finding: - 🔴 Must Fix: bugs, security vulnerabilities, resource leaks, correctness issues - 🟡 Should Fix: performance concerns, maintainability issues, missing patterns - 🟢 Nit: style, naming, minor suggestions - Group findings by file, not by reviewer - Credit which reviewer(s) flagged each issue - End with a summary table: counts by severity, overall verdict (ready to merge or not) Be direct and specific. Reference file names and line numbers. Don\u0026#39;t rubber-stamp. The key design decision here is separation of concerns. The orchestrator doesn\u0026rsquo;t know anything about security or performance or testing. It knows how to coordinate, deduplicate, and synthesize. Each specialist agent knows its domain deeply. When I want to improve how security reviews work, I edit one prompt file. The orchestrator doesn\u0026rsquo;t change.\nThe PR Writer # The other agent I use constantly is the pr-writer. After implementing and reviewing code, I need a pull request description. 
This agent reads the PR template, the commit history, and the diff, then fills out every section with specific information from the actual changes.\n{ \u0026#34;name\u0026#34;: \u0026#34;pr-writer\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;Generates pull request descriptions from git history using the project\u0026#39;s PR template\u0026#34;, \u0026#34;prompt\u0026#34;: \u0026#34;file://./prompts/pr-writer.md\u0026#34;, \u0026#34;tools\u0026#34;: [ \u0026#34;fs_read\u0026#34;, \u0026#34;execute_bash\u0026#34;, \u0026#34;grep\u0026#34;, \u0026#34;glob\u0026#34; ], \u0026#34;allowedTools\u0026#34;: [ \u0026#34;fs_read\u0026#34;, \u0026#34;grep\u0026#34;, \u0026#34;glob\u0026#34; ], \u0026#34;resources\u0026#34;: [ \u0026#34;file://.github/PULL_REQUEST_TEMPLATE.md\u0026#34; ], \u0026#34;welcomeMessage\u0026#34;: \u0026#34;📝 PR writer ready. I\u0026#39;ll generate a description from your branch history.\u0026#34; } With the prompt:\nYou write pull request descriptions. Given a branch\u0026#39;s commit history and diff summary, you produce a filled-out PR description using the project\u0026#39;s PR template. Your workflow: 1. Read the PR template from .github/PULL_REQUEST_TEMPLATE.md. 2. Run `git log main..HEAD --oneline` to get the commit history on this branch. 3. Run `git diff main --stat` to get a summary of changed files. 4. Read commit messages for detail. 5. Fill out every section of the PR template with specific, accurate information from the commits and diff. 6. For checkboxes, mark them [x] where you can confirm from the code/commits, leave [ ] where you can\u0026#39;t verify. 7. Output the filled PR as markdown directly in the chat. Do NOT create a file. Be thorough but concise. Reference specific files and changes. Don\u0026#39;t be generic. Notice the resources field. It loads the PR template at startup so the agent already knows the format before you ask it anything. The allowedTools are read-only. 
It can inspect the repo but can\u0026rsquo;t modify it.\nThe Anatomy of a Review Agent # For the specialist review agents, the pattern is consistent. Each one gets the same tools and resources but a different prompt focused on its domain. Here\u0026rsquo;s the infrastructure reviewer as an example:\n{ \u0026#34;name\u0026#34;: \u0026#34;review-infrastructure\u0026#34;, \u0026#34;description\u0026#34;: \u0026#34;AWS and infrastructure-focused code reviewer\u0026#34;, \u0026#34;prompt\u0026#34;: \u0026#34;file://./prompts/review-infrastructure.md\u0026#34;, \u0026#34;tools\u0026#34;: [ \u0026#34;fs_read\u0026#34;, \u0026#34;code\u0026#34;, \u0026#34;grep\u0026#34;, \u0026#34;glob\u0026#34; ], \u0026#34;allowedTools\u0026#34;: [ \u0026#34;fs_read\u0026#34;, \u0026#34;code\u0026#34;, \u0026#34;grep\u0026#34;, \u0026#34;glob\u0026#34; ], \u0026#34;resources\u0026#34;: [ \u0026#34;file://.kiro/steering/structure.md\u0026#34;, \u0026#34;file://.kiro/steering/tech.md\u0026#34; ], \u0026#34;welcomeMessage\u0026#34;: \u0026#34;🏗️ Infrastructure review agent ready. What should I inspect?\u0026#34; } With the prompt in prompts/review-infrastructure.md:\nYou are an AWS infrastructure code reviewer. Focus exclusively on: - CDK patterns: Cross-stack coupling via CloudFormation exports vs SSM parameters? Correct use of RemovalPolicy? Stack dependency ordering? - IAM: Least-privilege policies? Overly broad wildcards in actions or resources? Missing condition keys? - Encryption: S3 encryption enabled? KMS keys where needed? SSL/TLS enforced? - Networking: Security groups too permissive? Public access where it shouldn\u0026#39;t be? - Cost: Over-provisioned resources? Missing lifecycle rules? Inefficient storage classes? - Monitoring: Missing CloudWatch alarms or metrics? No logging configured? - Resilience: Single points of failure? Missing multi-AZ? No backup/retention policies? - Tagging: Resources missing required tags for cost allocation or ownership? 
For each finding: - Explain the operational risk - Rate severity: 🔴 Critical / 🟡 Medium / 🟢 Low - Suggest a specific fix Think about what breaks at 3 AM when nobody is watching. All five specialist agents follow this same structure. The only thing that changes is the prompt. Security thinks about attackers. Performance thinks about 100x load. Maintainability thinks about the developer six months from now. Test quality assumes every untested path will break in production.\nWhat This Looks Like in Practice # I\u0026rsquo;ve been running this setup on a project where I\u0026rsquo;m building an AI-powered chat interface for a raffle administration site. The site supports the Cascadian Gamers annual raffle to raise money for Extra Life, a charity that supports Children\u0026rsquo;s Miracle Network Hospitals.\nUsing Kiro skills and agents together, I was able to add the AI chat feature in a weekend. In another weekend, I completely rewrote the frontend. The five-agent review caught issues I would have missed in a manual pass: an overly broad IAM policy, a missing error handler on an async call, a test that was asserting the wrong thing.\nThe outcome is that we\u0026rsquo;ll have an agentic chat that can interact with our raffle data. And who knows, we might even draw winners using the AI chat this year. We\u0026rsquo;re in exciting times.\nThe File Structure # Here\u0026rsquo;s how agents are organized in the repo:\n.kiro/ agents/ code-reviewer.json pr-writer.json review-security.json review-infrastructure.json review-maintainability.json review-performance.json review-test-quality.json prompts/ code-reviewer.md pr-writer.md review-security.md review-infrastructure.md review-maintainability.md review-performance.md review-test-quality.md The JSON files are configuration. The markdown files are personality. 
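Because each config references its prompt by a file:// path, a renamed prompt file can silently break an agent. Here is a minimal sanity check I could run; the script and its assumption that the file:// paths resolve relative to the agents directory are mine, not a Kiro feature:

```shell
# Hypothetical sanity check (not a Kiro feature): verify that every agent
# JSON points at a prompt file that actually exists. Assumes the file://
# paths resolve relative to the directory that holds the agent configs.
check_agent_prompts() {
  dir="$1"
  for agent in "$dir"/*.json; do
    [ -f "$agent" ] || continue
    # the first file:// reference in the config is the prompt field
    ref=$(grep -o 'file://[^"]*\.md' "$agent" | head -n 1 | sed 's|file://||')
    if [ -n "$ref" ] && [ ! -f "$dir/$ref" ]; then
      echo "MISSING: $agent -> $ref"
    fi
  done
}

check_agent_prompts .kiro/agents
```

Silence means every prompt reference resolves; any MISSING line points at a config whose personality file has moved.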
Keeping them separate means I can iterate on an agent\u0026rsquo;s behavior without touching its permissions or tooling, and vice versa.\nGetting Started with Agents # If you already have Kiro skills, adding agents is the natural next step. Start with one. Pick a task you do repeatedly (code review, PR writing, documentation) and create an agent for it.\nThe Kiro agent docs walk through creation with /agent create. You can also create them manually. They\u0026rsquo;re just JSON files in .kiro/agents/.\nA few things I\u0026rsquo;ve learned:\nStart with allowedTools restrictive and expand as needed. Read-only agents are safer and still incredibly useful.\nUse file:// for prompts from day one. You\u0026rsquo;ll thank yourself when the prompt is 40 lines long and you need to edit it.\nThe resources field is underrated. Loading project context at startup means the agent doesn\u0026rsquo;t waste time asking \u0026ldquo;what framework is this?\u0026rdquo; or \u0026ldquo;where are the tests?\u0026rdquo;\nAgents and skills are better together. Skills define the workflow. Agents define the expertise. The combination is more than the sum of its parts.\nKeith\n","date":"8 March 2026","externalUrl":null,"permalink":"/posts/kiro-agents-upping-the-game/","section":"Posts","summary":"Custom Kiro agents turned my code review from a single-pass checklist into a five-agent parallel review with an orchestrator. Here’s how agents and skills work together, and what it means for shipping real software with AI.","title":"Kiro Agents Are Upping the Game for AI-Assisted Engineering","type":"posts"},{"content":"In my last post I mentioned that I used Kiro to ship an entire blog migration in a single day. Infrastructure, content, CI/CD, the works. I promised a deeper look at the AI tooling. Here it is.\nHow did I get into it? Well, on February 13th a very dear friend, Alex Wood, messaged and showed me a Package that he had been building with his Kiro Skills. 
He blogs about how he is doing Production Coding and that post only served to whet my appetite. Once he handed me the Package, I was off to the races. And I haven\u0026rsquo;t stopped. My delivery cycles have only gotten tighter and the value delivered has gone off the charts.\nSo, here\u0026rsquo;s the short version: I\u0026rsquo;ve taken the Skills that Alex handed me and built a library of reusable AI workflows called skills that handle the repetitive parts of my development process. Spec writing, code implementation, code review, session management, deployment. Each skill is a markdown file that guides Kiro on how to execute a specific workflow, step by step, with human review gates built in. They live in the repo alongside the code they operate on.\nThis isn\u0026rsquo;t about asking an AI to \u0026ldquo;write me some code.\u0026rdquo; It\u0026rsquo;s about encoding the way I actually work into something repeatable.\nWhat Are Skills? # A Kiro skill is a markdown file that lives in your repo at .kiro/skills/{name}/SKILL.md. Each one describes a workflow: when to run it, what steps to follow, what inputs it needs, what outputs it produces, and what rules to follow. When you invoke a skill, Kiro reads the instructions and executes the workflow, using its tools (file system, terminal, web search, AWS CLI) to do the actual work.\nBest of all, Kiro\u0026rsquo;s implementation of Skills is backed by the Agent Skills spec. These aren\u0026rsquo;t just some random prompts that were shipped because they worked in Kiro. They\u0026rsquo;re real and used by some of the biggest names in the industry. Have a look for yourself.\nThink of Skills like a runbook, except the \u0026ldquo;entity\u0026rdquo; following the runbook is an AI that can actually execute the steps.\nHere\u0026rsquo;s the structure:\n.kiro/ skills/ create-spec/ SKILL.md ← workflow instructions capture-skill/ SKILL.md session-handoff/ SKILL.md implement-and-review-loop/ SKILL.md ... Skills are portable. 
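Porting one is just a directory copy. A minimal sketch, with hypothetical repo paths and skill name; only the .kiro/skills/{name}/SKILL.md layout is assumed:

```shell
# Hypothetical helper for porting a skill between repos. The repo paths and
# skill name below are illustrative, not real projects.
port_skill() {
  src_repo="$1"; dst_repo="$2"; skill="$3"
  mkdir -p "$dst_repo/.kiro/skills"
  cp -r "$src_repo/.kiro/skills/$skill" "$dst_repo/.kiro/skills/"
}

# e.g.: port_skill . ../other-project session-handoff
```

After the copy, edit the project-specific parts of the SKILL.md in the destination repo.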
I\u0026rsquo;ve copied skills between projects, adapting the project-specific parts while keeping the workflow structure intact. The blog you\u0026rsquo;re reading right now was built using skills originally developed for a completely different project.\nHere\u0026rsquo;s the Most Valuable Player (MVP) of the skills collection: the Capture Skill. It can do anything from capturing a collection of prompts you\u0026rsquo;ve entered to reading a blog post, extracting the delta in how it approaches a problem, and adding that to your Skills collection. It\u0026rsquo;s amazing.\nSkill 1: Capture Skill # This is the meta skill, the one that creates other skills. When I find myself repeating a workflow or explaining the same process to Kiro more than once, I run capture-skill to turn it into a reusable prompt.\n# Capture Skill Create a new Kiro CLI skill (prompt) from a conversation, a pasted prompt, or a description. ## Input Optional: a name for the new skill (e.g., \u0026#34;refactor-code\u0026#34;). Will ask if not provided. ## Process ### Step 1: Gather Source Material Ask the user to provide one of: 1. A prompt they\u0026#39;ve written (paste the text directly) 2. A description of what the skill should do (help draft it) 3. Reference to earlier in this conversation (extract the relevant workflow) ### Step 2: Analyze and Structure From the provided material, identify: - Core purpose: what does this skill accomplish? - Required inputs: what arguments does it need? - Step-by-step process: break into clear phases - Expected outputs: what files/artifacts are produced? - Error cases: what could go wrong? 
### Step 3: Draft the Skill Create a markdown prompt following the structure of existing skills in `.kiro/skills/`: # {Skill Name} {One-line description} ## Input {Describe expected input or arguments} ## Process ### Phase 1: {Name} {Steps...} ### Phase 2: {Name} {Steps...} ## Output {What the user gets when complete} ## Rules - {Constraints and conventions} ### Step 4: Review with User Present the draft and ask: - Does this capture what you wanted? - Any steps to add or remove? - Confirm the skill name. ### Step 5: Save 1. Validate name: alphanumeric, hyphens, underscores only. Max 50 characters. 2. Save to `.kiro/skills/{name}/SKILL.md` 3. Confirm creation and show how to use it: `/skills` → select `{name}` ## Skill Naming Rules - ✅ Valid: `review-pr`, `debug-test`, `deploy-stacks` - ❌ Invalid: `my skill` (spaces), `review.code` (dots) ## Rules - Keep skills focused. One skill per workflow. - Include verification steps where appropriate. - Make the skill self-contained. Readable cold. - Ask the user rather than guessing on unclear points. The key insight is that skills are self-improving. Every time I use a skill and find a gap (a missing step, a wrong assumption, an edge case), I either fix it inline or run capture-skill to create a new one. The library grows organically from real work, not from trying to anticipate every scenario upfront. Best of all, with these fixes, the skills become more in line with how you work. Just ask Kiro to \u0026ldquo;create or improve skills\u0026rdquo; and it analyzes the delta and self-improves. Magic.\nSkill 2: Improve Skill # The companion to capture-skill. After using a skill and noticing something off (a missing step, a rule that got violated, output that didn\u0026rsquo;t match expectations), I run improve-skill to make targeted fixes while the context is still fresh.\n# Improve Skill Review the current chat session where a skill was used and improve it based on feedback. ## Workflow 1. 
Review the conversation first to identify skill gaps: - Where the skill\u0026#39;s output diverged from expectations - Missing steps that had to be done manually - Rules that were violated or missing - Output format issues 2. If gaps are obvious from context, propose the specific skills and fixes directly. If not clear, ask: \u0026#34;Which skill needs improvement?\u0026#34; and \u0026#34;What went wrong?\u0026#34; 3. Read the current skill file(s) from `.kiro/skills/`. 4. Multiple skills can be improved in one pass. Don\u0026#39;t force one-at-a-time. 5. For each skill, describe the change and apply it. 6. After applying changes, summarize all improvements made. ## Rules - Don\u0026#39;t rewrite the entire skill. Make targeted improvements. - Preserve what\u0026#39;s working well. - Add examples from the current session where they clarify the improvement. - When a rule was violated, strengthen the rule text (e.g., add ⚠️ MANDATORY markers) rather than just restating it. - Prefer adding guardrails (hard gates, warnings) over relying on behavioral compliance. This is how skills evolve. capture-skill creates them, improve-skill refines them. The combination means the library gets better every session without requiring a dedicated \u0026ldquo;skill maintenance\u0026rdquo; effort.\nSkill 3: Session Handoff # This one solved a problem that was driving me crazy. I\u0026rsquo;d be deep in a feature, end the session, come back the next day, and spend valuable time re-explaining context to Kiro. What branch am I on? What task was I working on? What decisions did I make and why? What\u0026rsquo;s deployed? What\u0026rsquo;s broken?\nThe session-handoff skill captures all of that into a structured document:\n# Session Handoff Generate a session handoff document capturing the current working state for the next session. ## Workflow 1. Check `git status`, `git branch`, and `git log --oneline -5` for current state. 2. 
Review the conversation history to identify what was accomplished this session. 3. Write the handoff to `.kiro/context/session-handoff.md` (rolling file, overwrite each time). ## Required Sections - BEFORE RESUMING: Blockers that must be resolved before any work (e.g., expired credentials, pending merges, VPN). This section goes first so the next session doesn\u0026#39;t start broken. - IMMEDIATE NEXT STEPS: Numbered list of what to do first next session (deploy, merge, test, etc.). - CURRENT STATE: Branch name, clean/dirty, last commit, phase progress (X/Y tasks), test counts, live URLs. - WHAT WE DID THIS SESSION: Each task completed with details, decisions made, bugs found and fixed. - WHAT\u0026#39;S REMAINING: Table of next tasks with status and notes. - AWS RESOURCES: Cloud resources with IDs, regions, and values (Lambda, S3, SSM params, etc.). - KEY FILES: Table of important file paths grouped by area. - KEY DECISIONS: Numbered list of architectural decisions to carry forward (with rationale). - STAKEHOLDER PREFERENCES: Workflow preferences, naming conventions, review process. ## Rules - Be specific. Include resource IDs, exact commands, and file paths. - Document decisions and their rationale, not just what was done. - Note any blockers or environment issues encountered. - After saving, immediately run the `auto-memory` skill to capture learnings into `.kiro/steering/memory.md`. - After auto-memory, remind the user: \u0026#34;Any skills need refining from this session? Run `improve-skill` if so.\u0026#34; When I end a session, I say \u0026ldquo;handoff\u0026rdquo; and Kiro checks git status, reviews what we accomplished, and writes a handoff document to .kiro/context/session-handoff.md. It includes specific resource IDs, exact commands, file paths, and the reasoning behind decisions. Not just what was done, but why.\nSkill 4: Session Resume # The companion to handoff. 
When I start a new session, I say \u0026ldquo;resume\u0026rdquo; and Kiro reads the handoff document, loads the project context from the steering files, checks git status to make sure reality matches the document, and presents a summary of where we left off and what\u0026rsquo;s next.\n# Session Resume Resume a previous working session by loading the latest handoff context. ## Workflow 1. Read all steering files from `.kiro/steering/` to load project context (structure, tech stack, branching rules, product overview). 2. Read all agent definitions from `.kiro/agents/` to understand available code review and automation agents. 3. Read all skill definitions from `.kiro/skills/` to understand available workflows. 4. List files in `.kiro/context/` sorted by modification date (newest first). 5. Read the most recent `session-handoff.md`. 6. Present a summary: - Where we left off: Current branch, phase, and immediate next steps - What\u0026#39;s live: Key resources and endpoints - What\u0026#39;s next: Top 3-5 action items from the handoff - Blockers: Anything that needs attention before resuming (expired creds, unpushed commits, etc.) 7. Run `git status` and `git branch` to confirm current state matches the handoff doc. Flag any discrepancies. 8. Ask: \u0026#34;Ready to pick up from here? Or do you want to pivot to something else?\u0026#34; ## Rules - Don\u0026#39;t dump the entire handoff doc. Summarize the actionable parts. - If git state doesn\u0026#39;t match the handoff, call it out clearly. - If there are unpushed commits or uncommitted changes, mention them first. - Load steering docs for project context but don\u0026#39;t recite them. - Carry forward stakeholder preferences from the handoff doc. The handoff/resume pair means I never lose context between sessions. I can pick up a project after a week away and be productive in under a minute. 
The AI reads its own notes from last time and knows exactly where things stand.\nSkill 5: Create Spec # This is one of my favorites for software engineering. We know that spec-driven development drives better results out of LLMs. This skill builds on that insight and supercharges it. The create-spec skill orchestrates a full specification pipeline: requirements gathering, high-level design, low-level design, and task planning. It produces a single unified document that becomes the input for implementation. Want to go turn by turn? Have Kiro ask you questions. Want to bring in MCP Servers or \u0026ldquo;Tools\u0026rdquo;? Easy: just tell it to use them.\nThe workflow has human review gates after every phase. Kiro doesn\u0026rsquo;t just generate a spec and hand it to you. It generates the requirements, stops, and asks for feedback. Then it generates the high-level design based on the approved requirements, stops again, and asks for feedback. Each phase builds on the previous one, and nothing advances without approval.\n# Create Spec Orchestrate the full specification pipeline: requirements → high-level design → low-level design → task plan. Produces a single unified spec document. This spec is the primary input for `implement-and-review-loop`. ## Input A description of what to build, or \u0026#34;resume\u0026#34; to continue an in-progress spec. ## Output A single file: `docs/{feature-name}-spec.md` ## Process ### Phase 0: Research (if AWS/infrastructure features) If the feature involves AWS APIs, Lambda runtimes, SDK capabilities, or infrastructure patterns: 1. Use tools to research. Call search_documentation, web_search, or read_documentation to verify assumptions about API capabilities, SDK support, and runtime limitations BEFORE writing requirements. 2. Document findings. Add a \u0026#34;Research Findings\u0026#34; section to the requirements with what\u0026#39;s supported, what\u0026#39;s not, and links to docs. 3. Flag constraints early. 
If research reveals a limitation, surface it in requirements as a constraint, not as a surprise in HLD. ### Phase 1: Requirements 1. Gather the user\u0026#39;s description, ask clarifying questions. 2. Produce the Requirements section. 3. STOP. Present for review. 4. If feedback given, revise and re-present. Loop until approved. 5. Save the spec file. Confirm: \u0026#34;Requirements approved. Moving to High-Level Design.\u0026#34; ### Phase 2: High-Level Design 1. Produce the HLD section using requirements as context. 2. STOP. Present for review. 3. If feedback given, revise. Loop until approved. 4. Update the spec file. Confirm: \u0026#34;HLD approved. Moving to Low-Level Design.\u0026#34; ### Phase 3: Low-Level Design 1. Produce the LLD section using requirements + HLD as context. 2. STOP. Present for review. 3. If feedback given, revise. Loop until approved. 4. Update the spec file. Confirm: \u0026#34;LLD approved. Moving to Task Plan.\u0026#34; ### Phase 4: Task Plan 1. Break the LLD into implementation tasks with dependencies. 2. Validate tasks with tools. For each task that references AWS APIs, env vars, SDK methods, or CLI commands, verify with tools before writing. 3. STOP. Present for review. 4. If feedback given, revise. Loop until approved. 5. Update the spec file. Confirm: \u0026#34;Spec complete! Ready for implement-and-review-loop.\u0026#34; ### Phase 5: Final Summary Present: - Spec file path - Requirement count (FR + NFR) - Component count from LLD - Task count with dependency waves - \u0026#34;Run implement-and-review-loop to start building.\u0026#34; ## Resuming an In-Progress Spec If the user says \u0026#34;resume\u0026#34;: 1. Find the most recent spec in `docs/` ending in `-spec.md`. 2. Determine which phase is next. 3. Pick up from there. ## Rules - Human review gate after every phase. Never auto-advance. - Each phase builds on the previous. - Every requirement must be traceable through HLD → LLD → at least one task. 
- If a spec already exists for this feature, ask whether to revise it or start fresh. Phase 0 is worth calling out. Before writing any requirements, Kiro researches the AWS services involved, checking API capabilities, SDK support, and known limitations. This prevents the painful mid-design pivot where you discover that the thing you designed doesn\u0026rsquo;t actually work the way you assumed. I\u0026rsquo;ve had that happen enough times to make research a mandatory first step.\nThe output is a single markdown file with requirements, design, and tasks all in one place. Every requirement traces through the design to at least one implementation task. When I hand this to the implement-and-review-loop skill, it has full context for every task it works on.\nSkill 6: Implement and Review Loop # This is where the actual coding happens, and it\u0026rsquo;s where things get agentic. The implement-and-review-loop skill chains implementation and code review in an automated cycle. It reads a task from the spec, implements it, runs a multi-agent code review, fixes the findings, and re-reviews until the code is clean.\n# Implement and Review Loop Orchestrate an implement → review → fix cycle for tasks in a spec. Chains implementation and review in a loop until code is clean. THIS IS THE DEFAULT ENTRY POINT for implementation work. When the user asks to \u0026#34;implement\u0026#34;, \u0026#34;build\u0026#34;, or \u0026#34;code\u0026#34;, use this skill. ## Input A task number, \u0026#34;next task\u0026#34;, or \u0026#34;implement all open tasks\u0026#34;. The spec is read from `docs/`. Look for `*-spec.md`. ## Process ### Phase 1: Implement 1. Read the task from the spec. 2. Implement the changes described. 3. Run guard-rails. All build gates must pass. ### Phase 1.5: Verify Build After implementation and before review, run guard-rails: 1. All build gates must pass (hard fail blocks the loop). 2. All test gates must pass (hard fail blocks the loop). 3. New code coverage check. 
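Since the loop consumes the spec, the traceability rule can also be spot-checked mechanically before handing the spec over. A hypothetical sketch, assuming a convention of FR-n requirement labels (my convention, not something the skill prescribes):

```shell
# Hypothetical spot-check (my convention, not part of create-spec): assuming
# requirements carry FR-<n> labels, verify each one is referenced somewhere
# in the spec's "## Task Plan" section. Deliberately naive: FR-1 also matches
# inside FR-10, so treat the output as a hint, not a verdict.
trace_requirements() {
  spec="$1"
  tasks=$(sed -n '/^## Task Plan/,$p' "$spec")   # everything after the heading
  for id in $(grep -o 'FR-[0-9][0-9]*' "$spec" | sort -u); do
    case "$tasks" in
      *"$id"*) ;;                                # referenced by at least one task
      *) echo "UNTRACED: $id" ;;
    esac
  done
}
```

Any UNTRACED line flags a requirement that never made it into the task plan.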
Flag untested public functions. 4. Secrets scan. Block if detected. 5. Branch check. Block if on main. ### Phase 2: Review (MANDATORY, never skip) Invoke review agents in parallel: - Security reviewer - Infrastructure reviewer - Maintainability reviewer - Performance reviewer - Test quality reviewer Each finding gets a severity: - 🔴 Must Fix: broken builds, security issues - 🟡 Should Fix: maintainability, performance - 🟢 Nit: style, naming (logged but not acted on) ### Phase 3: Fix (if actionable findings exist) 1. For each actionable finding (🔴 + 🟡), apply the fix. 2. Re-run the relevant test suite(s). 3. If tests fail, feed the error back and retry the fix (max 2 retries per finding). ### Phase 4: Re-review (if fixes were applied) 1. Run quick-review on the fix diff only. 2. If new 🔴 or 🟡 findings emerge, loop back to Phase 3. 3. Max 3 total review→fix iterations to prevent infinite loops. If still unresolved after 3 passes, present remaining findings to user for manual decision. ### Phase 5: Present Final State STOP. Present to user: - Summary of files created/modified - Test count (total passing) - Spec progress (X/Y tasks complete) - Review iterations completed - Any remaining findings that couldn\u0026#39;t be auto-resolved - Newly eligible tasks - \u0026#34;Ready to commit?\u0026#34; After approval, offer the full chain: 1. build-and-deploy 2. push-and-pr 3. session-handoff ### Phase 6: Commit (only after approval) 1. Stage specific files (not `git add .`). 2. Commit with descriptive message including review stats. ### Phase 7: Next Task (batch mode only) If running \u0026#34;implement all\u0026#34;, move to next eligible task and repeat from Phase 1. ## Rules - NEVER skip Phase 2 (review). Every task gets reviewed. - When batching: implement one task → review → fix → commit → next task. Do NOT batch multiple tasks into a single review. - Max 3 review→fix iterations. Escalate to human after. - Never commit without user approval. 
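The iteration cap in Phase 4 is what keeps the loop from spinning forever. Its control flow amounts to something like the following illustrative sketch, which is not the actual implementation; here findings stands in for the count of actionable 🔴/🟡 items a review pass returns:

```shell
# Illustrative control flow for the review→fix loop (not Kiro's code).
max_iterations=3
iteration=0
findings=1   # pretend the first review returned one actionable finding

while [ "$findings" -gt 0 ] && [ "$iteration" -lt "$max_iterations" ]; do
  iteration=$((iteration + 1))
  # apply fixes, re-run the relevant tests, then quick-review the fix diff
  findings=$((findings - 1))   # stand-in for the re-review shrinking the list
done

if [ "$findings" -gt 0 ]; then
  echo "Escalating to human after $iteration passes"
else
  echo "Clean after $iteration pass(es)"
fi
```

Whatever survives three passes goes to a human; the loop never grinds silently.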
The review phase is the part I'm most particular about. It uses specialized review agents, separate AI personas that each focus on one aspect of code quality. The security reviewer is paranoid about auth bypasses and hardcoded secrets. The infrastructure reviewer thinks about what breaks at 3 AM. The maintainability reviewer thinks about the developer who has to modify this code six months from now.

Here's the full set of agent definitions. Each one lives in .kiro/agents/ as a JSON file:

```json
{
  "name": "review-security",
  "description": "Security-focused code reviewer",
  "prompt": "You are a security-focused code reviewer. Focus exclusively on: - Authentication and authorization: Are auth checks present and correct? Can they be bypassed? - Input validation: Are all user inputs validated and sanitized? SQL injection, XSS, command injection? - Secrets management: Are secrets, keys, or credentials hardcoded or logged? - Data exposure: Does the API return more data than necessary? Are error messages leaking internals? - Dependency risks: Are there known-vulnerable packages? - IAM permissions: Are AWS IAM policies least-privilege? - Encryption: Is data encrypted at rest and in transit? For each finding: state the risk, rate severity, suggest a fix. Be paranoid. Assume attackers will find every weakness.",
  "tools": ["fs_read", "code", "grep", "glob"]
}
```

```json
{
  "name": "review-infrastructure",
  "description": "AWS and infrastructure-focused code reviewer",
  "prompt": "You are an AWS infrastructure code reviewer. Focus exclusively on: - CDK patterns: Cross-stack coupling? Correct use of RemovalPolicy? Stack dependency ordering? - IAM: Least-privilege policies? Overly broad wildcards? Missing condition keys? - Encryption: S3 encryption enabled? KMS keys where needed? SSL/TLS enforced? - Networking: Security groups too permissive? Public access where it shouldn't be? - Cost: Over-provisioned resources? Missing lifecycle rules? Inefficient storage classes? - Monitoring: Missing CloudWatch alarms or metrics? - Resilience: Single points of failure? Missing multi-AZ? - Tagging: Resources missing required tags? For each finding: explain the operational risk, rate severity, suggest a fix. Think about what breaks at 3 AM when nobody is watching.",
  "tools": ["fs_read", "code", "grep", "glob"]
}
```

```json
{
  "name": "review-maintainability",
  "description": "Maintainability-focused code reviewer",
  "prompt": "You are a maintainability-focused code reviewer. Focus exclusively on: - Code organization: Are files, classes, and methods in the right place? - Naming: Are names descriptive and consistent? - Separation of concerns: Are responsibilities properly divided? - DRY violations: Is there duplicated logic that should be extracted? - Error handling: Are errors handled consistently? - Configuration: Is config manageable across environments? - Documentation: Are public APIs documented? - Testability: Is the code structured for easy unit testing? Are dependencies injectable? For each finding: explain why it hurts maintainability, rate severity, suggest a refactoring. Think about the developer who has to modify this code 6 months from now.",
  "tools": ["fs_read", "code", "grep", "glob"]
}
```

```json
{
  "name": "review-performance",
  "description": "Performance-focused code reviewer",
  "prompt": "You are a performance-focused code reviewer. Focus exclusively on: - Resource allocation: Are objects created unnecessarily per-request? Are HTTP clients and SDK clients reused? - Data fetching: N+1 problems? Missing pagination? - Memory: Unnecessary allocations? Missing disposal? - Async patterns: Blocking calls in async methods? Thread pool starvation risks? - Caching: Are there missed caching opportunities? - Payload sizes: Are API responses unnecessarily large? - Infrastructure: Over-provisioned resources? Missing auto-scaling? For each finding: explain the performance impact, rate severity, suggest a fix. Think about what happens at 10x and 100x the current load.",
  "tools": ["fs_read", "code", "grep", "glob"]
}
```

```json
{
  "name": "review-test-quality",
  "description": "Test quality and coverage reviewer",
  "prompt": "You are a test quality reviewer. Focus exclusively on: - Coverage gaps: Are new code paths covered by tests? - Edge cases: Boundary conditions? Null inputs? Empty collections? - Assertion quality: Are assertions specific? Do tests verify behavior or just that code doesn't throw? - Test isolation: Do tests depend on external state or ordering? - Mock usage: Are dependencies properly mocked? - Naming: Do test names describe scenario and outcome? - Negative tests: Are failure paths tested? Also flag when production code changes have NO corresponding test changes. For each finding: explain what could go undetected, rate severity, suggest a test to add. Assume every untested path will break in production.",
  "tools": ["fs_read", "code", "grep", "glob"]
}
```

Each agent gets read-only access to the codebase and produces structured findings. The loop collects all findings, applies fixes for the actionable ones, and re-reviews until clean. The whole cycle (implement, review with five agents, fix, re-review) happens without me touching the keyboard. I review the final summary and approve or request changes.

The Compound Effect # Individually, any one of these skills saves time. But the real value is in how they chain together. A typical feature development session looks like this:

1. Resume: load context from last session
2. Create spec: requirements → design → tasks (with human gates)
3. Implement and review: code → 5-agent review → fix → re-review
4. Handoff: save state for next session

Each skill produces artifacts that the next skill consumes. The spec feeds the implementation loop. The implementation loop produces code that gets reviewed. The handoff captures everything for the resume. It's a pipeline, and each stage is a reusable, improvable component.

I've been running this workflow on a real project for several months now. The skills have been refined through actual use. Every time something goes wrong or a step is missing, I fix the skill. They're not theoretical. They're battle-tested.

Getting Started # If you want to try this yourself, start small. Don't try to build the whole pipeline at once. Pick one workflow that you repeat often. Maybe it's how you write commit messages, or how you review PRs, or how you set up a new feature branch. Write a skill for that one thing. Use it a few times.
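A starter skill can be tiny. For illustration, here is a hypothetical commit-message skill in the same markdown format (the file name, headings, and rules are invented for this example):

```markdown
# Commit Message

Write a conventional commit message for the currently staged changes.

## Input

The output of `git diff --staged`.

## Process

1. Summarize the change in one imperative line under 72 characters.
2. Add a short body explaining the why, not the what.
3. Never mention files that are not in the staged diff.
```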
Improve it. Then add another.\nThe Kiro skills documentation covers the format and conventions. But, honestly, you can just have Kiro start writing skills for you. And then you improve them over time. Skills are just markdown files in your repo, no special tooling beyond Kiro itself.\nThe best skills come from real friction, not from imagining what might be useful. Pay attention to the moments where you\u0026rsquo;re explaining the same process to your AI assistant for the third time. That\u0026rsquo;s a skill waiting to be written.\nUntil next time, go build something cool! Keith\n","date":"7 March 2026","externalUrl":null,"permalink":"/posts/building-ai-development-workflows-with-kiro-skills/","section":"Posts","summary":"How I built a library of reusable AI skills that handle spec writing, code review, session management, and deployment. Teaching your AI assistant repeatable workflows changes everything.","title":"Building AI Development Workflows with Kiro Skills","type":"posts"},{"content":"","date":"6 March 2026","externalUrl":null,"permalink":"/tags/aws/","section":"Tags","summary":"","title":"Aws","type":"tags"},{"content":"","date":"6 March 2026","externalUrl":null,"permalink":"/tags/blogging/","section":"Tags","summary":"","title":"Blogging","type":"tags"},{"content":"","date":"6 March 2026","externalUrl":null,"permalink":"/tags/cdk/","section":"Tags","summary":"","title":"Cdk","type":"tags"},{"content":"","date":"6 March 2026","externalUrl":null,"permalink":"/tags/cloudfront/","section":"Tags","summary":"","title":"Cloudfront","type":"tags"},{"content":"","date":"6 March 2026","externalUrl":null,"permalink":"/tags/github-actions/","section":"Tags","summary":"","title":"Github-Actions","type":"tags"},{"content":"","date":"6 March 2026","externalUrl":null,"permalink":"/tags/hugo/","section":"Tags","summary":"","title":"Hugo","type":"tags"},{"content":"If you\u0026rsquo;ve been reading this blog since the early days you may remember my posts about setting 
up Ghost on Azure. That setup served me well for years. First we ran Ghost on an Azure App Service. Then we ran a containerized Ghost on Azure App Service with Azure Container Registry, custom domain, Let's Encrypt SSL, the whole nine yards. It worked. But it was costing me about $76 a month to host what is essentially a personal blog that I update a few times a year.

That number had been bugging me for a while.

Why Move? # The short answer is cost. $76 a month for a blog is hard to justify when you're not writing regularly. The longer answer is that my role has changed. I'm now a Partner Solutions Architect at AWS. I spend my days recommending cloud architectures to Partners and their clients. It felt a little silly to be paying for a container-based CMS when I could host a static site for pennies.

Ghost is a great platform. I have nothing bad to say about it. But I didn't need a CMS with a database and an admin panel. I needed a place to put markdown files and have them show up on the internet. That's a static site generator's job.

Why Hugo? # I looked at a few options — Hugo, Astro, Next.js — and landed on Hugo for a few reasons. It's fast. The build for this entire site takes about 200 milliseconds. It's a single binary with no Node.js dependency chain to manage. And the Blowfish theme gave me a clean, modern look with dark mode, tag support, and card layouts without me having to write any CSS.

Hugo uses markdown for content and TOML for configuration. Posts are just directories with an index.md and any images sitting right next to it. No database, no admin panel, no container runtime.
Just files.

```
content/posts/
  migrating-ghost-to-hugo-on-aws/
    index.md        ← this post
    feature.jpg     ← card thumbnail
  azure-functions-getting-started/
    index.md
    PostmanContentType.jpg
    SourceControl1.jpg
```

Creating a new post is one command:

```shell
hugo new content posts/my-post-title/index.md
```

The AWS Setup # The architecture is straightforward:

- Amazon S3 holds the static files that Hugo generates
- Amazon CloudFront sits in front as the CDN, handling HTTPS, caching, and security headers
- Amazon Route 53 manages DNS for the subdomain
- AWS Certificate Manager provides a free SSL/TLS certificate
- AWS Glue and Amazon Athena give me query-able analytics from CloudFront access logs

Everything is defined as infrastructure as code using the AWS Cloud Development Kit (CDK) in TypeScript. Four stacks total — one for the SSL certificate and DNS, one for CloudFront access logging and analytics, one for the S3 bucket and CloudFront distribution, and one for CI/CD. I can tear the whole thing down and rebuild it with a single command.

Here's the core of the hosting stack — an S3 bucket with CloudFront in front of it using Origin Access Control:

```typescript
this.contentBucket = new s3.Bucket(this, 'ContentBucket', {
  blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,
  encryption: s3.BucketEncryption.S3_MANAGED,
  removalPolicy: cdk.RemovalPolicy.DESTROY,
  autoDeleteObjects: true,
});

this.distribution = new cloudfront.Distribution(this, 'Distribution', {
  defaultBehavior: {
    origin: origins.S3BucketOrigin.withOriginAccessControl(this.contentBucket),
    viewerProtocolPolicy: cloudfront.ViewerProtocolPolicy.REDIRECT_TO_HTTPS,
    compress: true,
  },
  domainNames: [props.domainName],
  certificate: props.certificate,
  defaultRootObject: 'index.html',
});
```

The bucket is fully private — no public access.
CloudFront uses S3BucketOrigin.withOriginAccessControl() which is the current best practice, replacing the older Origin Access Identity (OAI) pattern.

DNS: Cloudflare → Route 53 # My domain thehodos.com lives on Cloudflare. Rather than move the whole domain, I delegated just the keith subdomain to Amazon Route 53. CDK creates the hosted zone and outputs the nameservers:

```typescript
this.hostedZone = new route53.PublicHostedZone(this, 'HostedZone', {
  zoneName: props.domainName, // keith.thehodos.com
});

new cdk.CfnOutput(this, 'NameServers', {
  value: cdk.Fn.join(', ', this.hostedZone.hostedZoneNameServers!),
  description: 'Add these as NS records in Cloudflare for keith subdomain',
});
```

Then in Cloudflare, I added four NS records pointing keith to the Route 53 nameservers. That's it — Cloudflare handles the parent domain, Route 53 handles the subdomain, and the two never need to know about each other. If you're on Cloudflare and don't want to migrate your whole domain, subdomain delegation is the way to go.

CI/CD with GitHub Actions # Deployments are fully automated. When I push content changes to main, a GitHub Actions workflow builds the Hugo site and syncs it to S3, then invalidates the CloudFront cache. When I push infrastructure changes, a separate workflow runs cdk deploy. Both workflows authenticate to AWS using OpenID Connect federation — no long-lived access keys stored anywhere. Each workflow gets its own IAM role with only the permissions it needs. The content deploy role can't touch infrastructure, and the infrastructure role can't touch content.
Least privilege.

The OIDC setup is a CDK stack that creates the identity provider and scoped roles:

```typescript
const githubProvider = new iam.OpenIdConnectProvider(this, 'GithubOidc', {
  url: 'https://token.actions.githubusercontent.com',
  clientIds: ['sts.amazonaws.com'],
});

const siteRole = new iam.Role(this, 'SiteDeployRole', {
  roleName: 'blog-site-deploy',
  assumedBy: new iam.WebIdentityPrincipal(
    githubProvider.openIdConnectProviderArn,
    {
      StringLike: {
        'token.actions.githubusercontent.com:sub':
          'repo:my-org/my-repo:ref:refs/heads/main',
      },
    },
  ),
});
```

And the site deploy workflow is straightforward — Hugo build, S3 sync, cache invalidation:

```yaml
- uses: peaceiris/actions-hugo@v3
  with:
    hugo-version: '0.157.0'
    extended: true
- run: hugo --minify
- run: aws s3 sync public/ "s3://$BUCKET" --delete
- run: aws cloudfront create-invalidation --distribution-id "$DIST_ID" --paths "/*"
```

The whole pipeline from git push to live site takes about two minutes.

What It Costs # This is the part I'm most happy about. My previous Ghost setup on Azure was running me approximately $76 a month. The new setup on AWS:

- S3 storage: A few cents for ~15MB of static files
- CloudFront: Free tier covers the first 1TB of data transfer and 10 million requests
- Route 53: $0.50/month for the hosted zone
- Certificate Manager: Free
- Athena: $5 per TB scanned — my logs are kilobytes, so effectively $0

When I want to check traffic, I just run a query in the Amazon Athena console:

```sql
SELECT cs_uri_stem, COUNT(*) as hits
FROM blog_analytics.cloudfront_logs
WHERE log_date >= '2026-03-01'
  AND cs_uri_stem LIKE '/posts/%'
GROUP BY cs_uri_stem
ORDER BY hits DESC
LIMIT 10;
```

No dashboards to pay for.
Just SQL when I'm curious.

I'm looking at roughly $1-2 per month total. That's a 97% cost reduction. For a personal blog with modest traffic, a static site on S3 and CloudFront is hard to beat.

The AI Assist # So here's the thing — I didn't build all of this by hand over weeks of evenings and weekends. I built it TODAY, using Kiro (Kiro CLI, to be precise), an AI-powered development tool that helped me ship the entire migration — infrastructure, content, CI/CD, and all — in a single day.

I've been building out a set of custom Kiro skills and agents that help me with the kind of work I do when writing code: writing specs, implementing code, reviewing changes, running pre-commit checks, and managing deployments. They're basically reusable workflow automations that know about the project they're working in. Kiro CLI handled the scaffolding, the boilerplate, and the repetitive parts. I focused on the decisions — architecture choices, content curation, and making sure the end result actually looked good.

I'll go deeper on the Kiro setup in a future post. For now I'll just say this: the best way to understand agentic AI is to use it on a real project with real stakes. This blog was that project for me.

Migrating the Content # I had about 16 posts on Ghost. Not a huge corpus, but enough that I didn't want to manually copy-paste each one. The posts were scraped from the live site, converted to Hugo-compatible markdown with proper front matter, and placed into page bundle directories. Images were downloaded from Azure Blob Storage and co-located with their posts. Old Ghost URLs like /2017/04/25/getting-started-with-azure-functions/ are handled by Hugo aliases that generate static redirect pages. No server-side redirect rules needed.

The whole content migration took a couple of hours.
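For reference, the front matter of a migrated post can declare its old Ghost URL as an alias. A hypothetical TOML sketch (the title is illustrative; the alias path is one of the real old Ghost URLs):

```toml
+++
title = "Getting Started with Azure Functions"
date = 2017-04-25
aliases = ["/2017/04/25/getting-started-with-azure-functions/"]
+++
```

Hugo emits a small static redirect page at each alias path pointing to the post's new URL.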
For a blog with hundreds of posts you'd want proper tooling, but for 16 posts the direct approach worked fine.

Lessons Learned # A few things I picked up along the way:

CloudFront's defaultRootObject only works for the site root. If you're hosting a static site with clean URLs like /posts/my-post/, you need a CloudFront Function to rewrite those requests to /posts/my-post/index.html. This is a classic gotcha that isn't obvious until you deploy and start getting 404s on every page except the homepage.

```javascript
function handler(event) {
  var request = event.request;
  var uri = request.uri;
  if (uri.endsWith('/')) {
    request.uri += 'index.html';
  } else if (!uri.includes('.')) {
    request.uri += '/index.html';
  }
  return request;
}
```

This runs on every viewer request at the edge. It's a CloudFront Function (not Lambda@Edge), so it's fast and cheap.

CDK cross-region references work but add complexity. My SSL certificate has to live in us-east-1 (CloudFront requirement) while my S3 bucket is in us-west-2. CDK's crossRegionReferences feature handles this, but it creates Lambda-backed custom resources behind the scenes. Worth knowing what's happening under the hood.

Static sites are operationally boring — and that's the point. There's no runtime to patch, no database to back up, no container to keep healthy. Hugo generates HTML files. S3 serves them. CloudFront caches them. I can go months without thinking about this infrastructure and it'll just keep working.

What's Next # I'm planning to write more regularly now that the friction of publishing is basically zero. git push and it's live.
No admin panel, no deploy scripts to remember, and no monthly bill that makes me wince.\nIf you\u0026rsquo;re running a personal blog on a platform that\u0026rsquo;s more complex (and more expensive) than it needs to be, take a look at static site generators. Hugo, Astro, Eleventy — pick one that fits your style. Pair it with S3 and CloudFront and you\u0026rsquo;ve got a setup that\u0026rsquo;s fast, cheap, and will run itself.\nUp next: a deeper look at the AI-powered development with Kiro, including samples of the agents and skills I used to build all of this.\n","date":"6 March 2026","externalUrl":null,"permalink":"/posts/migrating-ghost-to-hugo-on-aws/","section":"Posts","summary":"After years on Ghost hosted on Azure, I moved to Hugo on AWS. Here’s why I made the switch, what the new setup looks like, and how AI tooling helped me ship it in a weekend.","title":"Migrating My Blog from Ghost to Hugo on AWS","type":"posts"},{"content":"","date":"6 March 2026","externalUrl":null,"permalink":"/tags/migration/","section":"Tags","summary":"","title":"Migration","type":"tags"},{"content":"","date":"6 March 2026","externalUrl":null,"permalink":"/tags/s3/","section":"Tags","summary":"","title":"S3","type":"tags"},{"content":"","date":"3 July 2020","externalUrl":null,"permalink":"/tags/azure/","section":"Tags","summary":"","title":"Azure","type":"tags"},{"content":"","date":"3 July 2020","externalUrl":null,"permalink":"/tags/docker/","section":"Tags","summary":"","title":"Docker","type":"tags"},{"content":"","date":"3 July 2020","externalUrl":null,"permalink":"/tags/ghost/","section":"Tags","summary":"","title":"Ghost","type":"tags"},{"content":"How to set up a blog in Microsoft Azure using Ghost. Optionally, add on a custom domain name and SSL encryption using Let\u0026rsquo;s Encrypt.\nWHY GHOST? # I was looking for an easy-to-manage solution for blogging and stumbled upon Gareth Emslie\u0026rsquo;s post about running Ghost Blog in containers on Azure App Service. 
Just a few clicks and the blog was up and running.

PREREQUISITES # You'll need an Azure Subscription, a Storage account, and Docker running on your local machine.

Hosting Ghost in a container on Azure comes at a price. My setup, with Azure App Service running Standard Tier ($71) and an Azure Container Registry on Basic Tier ($5), sets me back approximately $76 a month. Oh, and it can host a slew of other containers.

With Azure Container Registry, CI/CD is built in. I'm using Azure DevOps to host my repository. The deployment happens whenever I push an update to the container in my registry.

CUSTOM DOMAINS # For this blog I set up a couple of "A" records and "TXT" records with my domain registrar and configured settings in the Azure Portal. Mapping a Custom Domain Name

SSL ENCRYPTION # Note: As of publication you will need to bring your own SSL certificate. Linux App Services currently do not support webjobs, so Let's Encrypt automation won't work directly.

Resources:

- Troy Hunt: Everything you need to know about loading a free Let's Encrypt certificate into an Azure website
- Azure Docs: Setting up Azure AD
- Setting Up Let's Encrypt for Ghost

CONCLUSION # That pretty much wraps it up.
If you have any questions please leave a comment or send me a Tweet.\n","date":"3 July 2020","externalUrl":null,"permalink":"/posts/setting-up-blog-using-ghost-v2/","section":"Posts","summary":"Updated guide for running Ghost blog in containers on Azure App Service with custom domains and SSL.","title":"Setting Up Your Own Blog Using Ghost (Updated 2020)","type":"posts"},{"content":"","date":"29 October 2017","externalUrl":null,"permalink":"/tags/charity/","section":"Tags","summary":"","title":"Charity","type":"tags"},{"content":" What You Could Win By Giving Money to Kids # Next Saturday I will be participating in Cascadian Gamers\u0026rsquo; Extra Life 2017 25 hours (November 5th is when we wind our clocks back here in the US, thus the extra hour) of gaming. For those who are supporting us we will be raffling off some cool and unique prizes.\nClick here if you\u0026rsquo;re not familiar with Extra Life and want the backstory. This post is for those that are looking for the prize list.\nHow do I win this cool stuff? # Remember, all the money goes to kids. Here\u0026rsquo;s the link to the team if you\u0026rsquo;d rather support the team as a whole.\nIf this is your first time donating make sure to read the rules for the raffle and how to allocate your tickets.\nSOME of the $10 Raffle Prizes # \u0026ldquo;This is Fine\u0026rdquo; Dog Stuffed Animal Collectible Several Jars of Sheba\u0026rsquo;s Award-Winning Treasonberry Jam Tons of soccer scarves, some signed by your favorite Cascadian soccer players Several 2lb Slabs of Homemade Bacon Astro A10 Headset (Signed by Stefan Frei) Many videogames — Battlefield 1, FIFA 18, Madden, Destiny 2, and more! Super Nintendo Classic Edition – THREE of these! Titanfall 2 Collector\u0026rsquo;s Edition - Pilot Helmet! AND MUCH, MUCH MORE! OVER TWO HUNDRED PRIZES! 
SOME of the $50 Raffle Prizes # $500 Tattoo Gift Certificate Amazon Echo Game-Worn NHL Hockey Sweaters Homemade Zelda Quilt Red 2017 Keeper Kit - Signed by Stefan Frei Xbox One X Portland Timbers Team Signed Replica Kit MST3K 2017 Poster (signed by Joel Hodgson, Jonah Ray, Felicia Day, and Patton Oswalt) Pair of Game-Worn Keeper Gloves (Signed by Stefan Frei) 2017 MLS All-Star Jersey Signed By Team A bottle of 10 year Rip Van Winkle Bourbon AND MUCH MORE! ","date":"29 October 2017","externalUrl":null,"permalink":"/posts/extra-life-partial-prize-list/","section":"Posts","summary":"What you could win by giving money to kids — the Cascadian Gamers Extra Life 2017 prize list.","title":"Extra Life: The Partial Prize List","type":"posts"},{"content":"","date":"29 October 2017","externalUrl":null,"permalink":"/tags/gaming/","section":"Tags","summary":"","title":"Gaming","type":"tags"},{"content":"Every year some of my soccer buddies, who also happen to game, get together on the first weekend in November to do something really cool. They get together and game for 24 hours straight to raise money for the Children\u0026rsquo;s Miracle Network Hospital in the Pacific Northwest. What started out as a challenge (I\u0026rsquo;m going to lose my sanity around midnight, I promise) has grown into something unique with prizes, giveaways and a live stream of the entire event.\nThis year we have probably the biggest lot of games in addition to an Xbox One X, rare gaming items, whiskey, Sounders, Timbers and Whitecaps gear (read below before you freak out, soccer friends). And there will be more to come.\nWant to support us? Awesome. You can support the team here: Cascadian Gamers on Extra Life.org. All the money I raise will be going to Seattle Children\u0026rsquo;s Hospital. You can also win some really cool stuff.\nTHE CASCADIAN GAMERS SUPER MEGA RAFFLE™ # For every donation through anyone on the Cascadian Gamers team, you\u0026rsquo;ll get raffle tickets entered into our drawings. 
Those tickets remain until they\u0026rsquo;re drawn, so you have tons of chances to win! You can follow us on Twitter at @CascadianGamers for all the latest information!\nWe have TWO raffle tiers this year! $10 and $50 tickets! The $10 prize pool is packed with hundreds of items, such as games, a SNES Classic Edition, collectibles, Funko Pops, and much, much more! The $50 prize pool has some AMAZING prizes, such as game-worn hockey sweaters, autographed items from some of your favorite soccer players, and some surprise one-of-a-kind items!\nWhen you donate, please use the encouraging message field to tell us how you\u0026rsquo;d like to break up your tickets! If you don\u0026rsquo;t let us know, we\u0026rsquo;re going to make them all $10 tickets!\nAdditionally, if you\u0026rsquo;re a supporter or fan of the Sounders, Timbers, or Whitecaps, please let us know! (We can\u0026rsquo;t award some of these prizes to opposing fans. Sorry!)\nNOTE: You can only win ONE prize from the $50 pool and FIVE prizes from the $10 pool. Once your limit has been hit, your remaining tickets are withdrawn from the digital bucket!\n","date":"17 October 2017","externalUrl":null,"permalink":"/posts/extra-life-support-kids/","section":"Posts","summary":"Cascadian Gamers’ Extra Life 2017 — game for 24 hours to raise money for Children’s Miracle Network.","title":"Extra Life: Support Kids and Win Cool Stuff","type":"posts"},{"content":"","date":"24 September 2017","externalUrl":null,"permalink":"/tags/github/","section":"Tags","summary":"","title":"Github","type":"tags"},{"content":" Life Changes # So, it turns out that having a child isn\u0026rsquo;t conducive to blogging as much as I would like. 
I have been thinking about what I can commit to over the next year and while it probably won\u0026rsquo;t be blogging as much as I would like, I do feel like I can make a commitment of committing to GitHub every day.\nThere have been some other things, of course, but I will get into those in a future post.\nGoing Forward # I have overwritten my GitHub repository used for this blog and modernized it so that the Azure Functions now use the WebJobs SDK. The location is still the same though https://github.com/ssfcultra/AzureFunctionsBlogDemos. I will be adding a README.md for the prerequisites so that folks can fork the repository and run it in their own Azure Subscription.\nUntil next time, which won\u0026rsquo;t be far away (I promise), strive for excellence!\nKeith\n","date":"24 September 2017","externalUrl":null,"permalink":"/posts/its-been-a-while/","section":"Posts","summary":"Life changes and getting back to coding.","title":"It's Been A While","type":"posts"},{"content":"","date":"24 September 2017","externalUrl":null,"permalink":"/tags/personal/","section":"Tags","summary":"","title":"Personal","type":"tags"},{"content":"","date":"5 June 2017","externalUrl":null,"permalink":"/tags/algorithms/","section":"Tags","summary":"","title":"Algorithms","type":"tags"},{"content":" Introduction # This is the third and final part of the series. Part 1 set up ServiceBus Topics. Part 2 converted to ServiceBus Triggers.\nToday we add QuickFind, QuickUnion, and WeightedQuickUnion as ServiceBus Topic Triggers and analyze the performance.\nA Bug! # There\u0026rsquo;s an intriguing bug in the Console App: the Random class uses the CPU clock, so when creating both arrays in quick succession, they get the same seed and produce identical arrays. 
This means QuickUnion runs at optimal O(n) performance (merging each number to itself), while WeightedQuickUnion and WeightedQuickUnionWithPathCompression run at their worst (linear time with flat trees).\nThe fix: initialize Random as a static class member.\nAnalysis # QuickFind is always the slowest — O(n²) regardless of input. It will never be Usain Bolt.\nQuickUnion is better but with random input shows performance closer to quadratic. Worst case is O(n²).\nWeightedQuickUnion and WeightedQuickUnionWithPathCompression are the champions — at worst linear time, and with the bug fix showing logarithmic runtimes that are half the best QuickUnion runs.\nConclusion # We demonstrated four merge algorithms using Azure Functions with ServiceBus Topics. The code has DRY violations between the four triggers — a future post will explore refactoring using Inversion of Control.\nSource: AzureFunctionsBlogDemos\n","date":"5 June 2017","externalUrl":null,"permalink":"/posts/azure-functions-merge-algorithm-part-3/","section":"Posts","summary":"Part 3 of 3: Adding QuickFind, QuickUnion, and WeightedQuickUnion — analyzing algorithmic complexity and finding a bug in the test harness.","title":"Azure Functions: Merge Algorithm Runtime Comparison (Part 3)","type":"posts"},{"content":"","date":"5 June 2017","externalUrl":null,"permalink":"/tags/azure-functions/","section":"Tags","summary":"","title":"Azure-Functions","type":"tags"},{"content":"","date":"5 June 2017","externalUrl":null,"permalink":"/tags/csharp/","section":"Tags","summary":"","title":"Csharp","type":"tags"},{"content":"","date":"5 June 2017","externalUrl":null,"permalink":"/tags/performance/","section":"Tags","summary":"","title":"Performance","type":"tags"},{"content":" Introduction # This is the second part of a three part series on algorithmic runtime comparison. 
For the full intro check out the first post.\nToday we convert our HTTP Trigger into a ServiceBusTopic Trigger that reads from ServiceBus, runs the merge algorithm, writes output to Azure Blob Storage, and writes performance data to Azure Table Storage.\nKey Changes # The function now takes a MergingArray message from ServiceBus instead of an HTTP request. Since arrays are too large for Table Storage columns (64kb limit), we write them to Blob Storage as text files and only write performance metrics to Table Storage.\nThe function.json binds a serviceBusTrigger input (with topic name and subscription), Table Storage output, and Blob Storage output.\nTesting # A Console App sends 100 requests to MergeTrigger, which fans out to the ServiceBus subscriptions. Using Azure Storage Explorer, you can inspect the Merging table for performance data and the merging blob container for full input/output files.\nConclusion # We converted our HTTP Trigger to use ServiceBus Topics as input. Part 3 adds the remaining three algorithms and analyzes the performance differences.\n","date":"2 June 2017","externalUrl":null,"permalink":"/posts/azure-functions-merge-algorithm-part-2/","section":"Posts","summary":"Part 2 of 3: Converting an HTTP Trigger to a ServiceBus Topic Trigger — reading from ServiceBus, writing to Blob and Table Storage.","title":"Azure Functions: Merge Algorithm Runtime Comparison (Part 2)","type":"posts"},{"content":"","date":"2 June 2017","externalUrl":null,"permalink":"/tags/blob-storage/","section":"Tags","summary":"","title":"Blob-Storage","type":"tags"},{"content":"","date":"2 June 2017","externalUrl":null,"permalink":"/tags/service-bus/","section":"Tags","summary":"","title":"Service-Bus","type":"tags"},{"content":" Introduction # One of the things I really enjoy while writing variations of algorithms is comparing the total run-time between them. 
I find it rewarding to start with a minimal amount of iterations, say 10 merges, where QuickFind (O(n²)) and WeightedQuickUnion (O(m log n)) look almost the same. Then add more iterations to see the time savings become apparent.\nFor today\u0026rsquo;s demo we compare four algorithms from Robert Sedgewick\u0026rsquo;s Algorithms in Java: QuickFind, QuickUnion, WeightedQuickUnion, and WeightedQuickUnionWithPathCompression.\nThe architecture: an HTTPTrigger writes a message to a Service Bus Topic with Subscriptions for each algorithm. Each algorithm runs as a ServiceBusTopicTrigger, writes output to Blob Storage and performance data to Table Storage.\nPrerequisites # Azure ServiceBus (Standard tier, ~$10/month for Topics) Source: AzureFunctionsBlogDemos Adding the Topic # Create a Service Bus namespace in the Azure Portal, grab the connection string from Shared Access Policies, and add it as AzureWebJobsServiceBus in your Function App settings.\nAdding the Code # The MergeTrigger HTTP Trigger receives the arrays, creates the ServiceBus Topic and Subscriptions (one per algorithm), and sends the message. The refactored MergingArray.cs contains all four algorithms with a Merge() method that dispatches based on an enum.\nConclusion # Today we set up ServiceBus Topics so one HTTP request fans out to four algorithm subscriptions. Part 2 converts our HTTP Trigger to a ServiceBus Trigger. 
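As a reference while comparing runtimes, here is a minimal sketch of the strongest variant, weighted quick-union with path compression (in Python here; the series\u0026rsquo; actual code is the C# MergingArray.cs, and the class and method names below are mine):

```python
class WeightedQuickUnionPC:
    # Union-find with weighting and path compression.
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, p):
        root = p
        while root != self.parent[root]:
            root = self.parent[root]
        # Path compression: point every node on the path at the root.
        while p != root:
            self.parent[p], p = root, self.parent[p]
        return root

    def union(self, p, q):
        rp, rq = self.find(p), self.find(q)
        if rp == rq:
            return
        # Weighting: hang the smaller tree under the larger one.
        if self.size[rp] < self.size[rq]:
            rp, rq = rq, rp
        self.parent[rq] = rp
        self.size[rp] += self.size[rq]

    def connected(self, p, q):
        return self.find(p) == self.find(q)
```

The weighting keeps trees shallow (logarithmic height), and path compression flattens them further on every find, which is why this variant wins the comparison.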
Part 3 adds the remaining algorithms and analyzes performance.\n","date":"2 June 2017","externalUrl":null,"permalink":"/posts/azure-functions-merge-algorithm-part-1/","section":"Posts","summary":"Part 1 of 3: Setting up Azure ServiceBus Topics to compare merge algorithm performance across QuickFind, QuickUnion, WeightedQuickUnion and WeightedQuickUnionWithPathCompression.","title":"Azure Functions: Merge Algorithm Runtime Comparison (Part 1)","type":"posts"},{"content":" Introduction # Recently, I have become more interested in the fundamentals of computer science so I can become a better programmer and engineer.\nToday we will create a simple sorting Function that will take two arrays as input, write each incoming request to Azure Table Storage and return the sorted array as well as some statistics on how long the Function ran.\nToday\u0026rsquo;s example uses the WeightedQuickUnionWithPathCompression algorithm from Robert Sedgewick\u0026rsquo;s Algorithms in Java, Third Edition. He and Kevin Wayne have published a fourth edition which you can read online. If you\u0026rsquo;re interested in going further they offer their course free of charge on Coursera.\nPrerequisites # Today\u0026rsquo;s demo assumes your Function is set up as a class library. Source code: AzureFunctionsBlogDemos. Adding the Function # We add a WeightedQuickUnionWithPathCompression class that parses an HTTP request containing two arrays, runs the merge algorithm, writes the result to Azure Table Storage, and returns the sorted output with runtime statistics.\nThe function.json binds an HTTP trigger for input and Azure Table Storage for output. The Arrays.cs shared class contains multiple merge algorithms (QuickFind, QuickUnion, WeightedQuickUnion, WeightedQuickUnionWithPathCompression) that we\u0026rsquo;ll compare in future posts.\nRun and Test # Hit F5, grab the URL, and use Postman to send a POST with two arrays of integers. 
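The post does not show the exact request schema, so the field names below are an assumption on my part; a raw JSON body along these lines is the shape to aim for:

```json
{
  "FirstArray": [3, 1, 4, 1, 5],
  "SecondArray": [9, 2, 6, 5, 3]
}
```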
The function merges them, writes to Table Storage, and returns the result with runtime.\nUsing Azure Storage Explorer, you can inspect the \u0026ldquo;Sorting\u0026rdquo; table to see inputs, outputs, and runtimes.\nConclusion # Today we added a new Function that takes two arrays as input and merges them. We\u0026rsquo;ve taken advantage of our continuous deployment and can see every request written to Azure Table Storage with performance data.\nWhat\u0026rsquo;s next: # Merge Algorithm Runtime Comparison series — comparing QuickFind, QuickUnion, WeightedQuickUnion and WeightedQuickUnionWithPathCompression using Azure ServiceBus Topics. ","date":"26 May 2017","externalUrl":null,"permalink":"/posts/azure-functions-going-beyond-hello-world/","section":"Posts","summary":"Creating a sorting Function that writes to Azure Table Storage — using the WeightedQuickUnionWithPathCompression algorithm.","title":"Azure Functions: Going Beyond Hello World!","type":"posts"},{"content":"","date":"26 May 2017","externalUrl":null,"permalink":"/tags/table-storage/","section":"Tags","summary":"","title":"Table-Storage","type":"tags"},{"content":"Admittedly, the Function that we created last month didn\u0026rsquo;t do a whole lot. We sent in a \u0026ldquo;name\u0026rdquo; and got a welcome message back. That\u0026rsquo;s okay though as we have something that can be added to source control.\nSource Control Options # Visual Studio Team Services Local Git Repository GitHub BitBucket Dropbox Azure Functions gives the developer a wide range of options. 
For this demo let\u0026rsquo;s use GitHub!\nIf you are not yet familiar with Git-based source control I recommend: Continuous Deployment For Azure Functions.\nOpen the Azure Portal, navigate to your Function App, select \u0026ldquo;Deployment Options\u0026rdquo; under Code Deployment, click Setup, select GitHub, authorize, and optionally add Performance Testing.\nOnce deployed, we can make a change in VS Code, commit and push to trigger a new deployment:\ngit add -A git commit -m \u0026#34;Update showing CI/CD works.\u0026#34; git push origin master Conclusion # We now have CI/CD working from GitHub. We can make changes, push them, and see the results immediately in our Function.\nWhat\u0026rsquo;s next: # Going beyond Hello World — adding complexity with Table Storage Further Reading # Azure Docs: Functions Best Practices Continuous Deployment for Azure Functions ","date":"25 May 2017","externalUrl":null,"permalink":"/posts/azure-functions-deploy-from-source-control/","section":"Posts","summary":"Setting up continuous deployment for Azure Functions from GitHub.","title":"Azure Functions: Deploy From Source Control","type":"posts"},{"content":"","date":"25 May 2017","externalUrl":null,"permalink":"/tags/cicd/","section":"Tags","summary":"","title":"Cicd","type":"tags"},{"content":"","date":"22 May 2017","externalUrl":null,"permalink":"/tags/family/","section":"Tags","summary":"","title":"Family","type":"tags"},{"content":"It has been almost a month since my last post. Things got a little busy. For observant types I have changed the \u0026ldquo;About\u0026rdquo; section of the blog from \u0026ldquo;Soon to be dad\u0026rdquo; to read \u0026ldquo;Dad.\u0026rdquo;\nTheodore Sebastien Hodo was born April 29th at 18:34 at Swedish Ballard. Things were a little complicated at first but now everyone is doing quite well. 
There\u0026rsquo;s more to the story but I will save this for a future blog post.\nThe last few weeks I have been on paternity leave as we transition to our new life as a family. Originally, I planned on learning more about data structures and algorithms during this time to fill in some of the gaps in my computer science knowledge (as well as blogging daily). And, of course, as any new parent knows, it didn\u0026rsquo;t quite work out that way.\nAlso, like most new parents, I can\u0026rsquo;t resist sharing some photos of our new addition.\nA quick thank you to Swedish Medical Center Ballard and First Hill. Their care for our family was amazing and we will forever be grateful. Thanks to my employer, Avanade, and my colleagues who were very understanding, accommodating and compassionate. Lastly, thanks to our insurance provider Premera. Their staff was very helpful whenever we had a question and I\u0026rsquo;m grateful they are with us on this journey.\nI plan on getting back into the normal flow tomorrow where we\u0026rsquo;ll expand on the Azure Functions tutorials. There are some exciting events that came out of Build so look for that.\nUntil tomorrow\u0026hellip; Be excellent.\nKeith\n","date":"22 May 2017","externalUrl":null,"permalink":"/posts/hello-world-theodore-sebastien-hodo/","section":"Posts","summary":"Welcoming Theodore Sebastien Hodo to the world.","title":"Hello World: Theodore Sebastien Hodo is Here!","type":"posts"},{"content":"Yesterday we covered how to develop and debug an Azure Function using Visual Studio. Today we will cover developing and debugging Azure Functions using Visual Studio Code and Powershell.\nPrerequisites # Visual Studio Code Powershell (or bash) An Azure Subscription Azure Command Line Interface Why Visual Studio Code? # VS Code provides a lightweight, cross-platform alternative to Visual Studio. It offers the same rich experience whether you\u0026rsquo;re on Windows, macOS or Linux. 
I started using VS Code because I wanted to keep my hands on the keyboard as much as possible. If you\u0026rsquo;re not quite sold yet read more here.\nCreate and Test a New Function # Navigate to your project directory in Powershell and run code . to launch VS Code. Then run func new to create a new HttpTrigger. Update line 20 in run.csx to return a different message:\nreq.CreateResponse(HttpStatusCode.OK, \u0026#34;Hello \u0026#34; + name + \u0026#34; from Visual Studio Code!\u0026#34;); Run func run to launch the local Function Host, then test with Postman using your local URL. The great thing with VS Code is you can update your Function, save it, and see the update on the next request.\nConclusion # We now have a lightweight, cross-platform way to develop and test Azure Functions using VS Code and Powershell.\nWhat\u0026rsquo;s next:\nDeploy from source control Going beyond Hello World ","date":"28 April 2017","externalUrl":null,"permalink":"/posts/azure-functions-developing-vs-code-powershell/","section":"Posts","summary":"Developing and debugging Azure Functions using VS Code and Powershell as a lightweight alternative to Visual Studio.","title":"Azure Functions: Developing in Visual Studio Code and Powershell","type":"posts"},{"content":"","date":"28 April 2017","externalUrl":null,"permalink":"/tags/powershell/","section":"Tags","summary":"","title":"Powershell","type":"tags"},{"content":"","date":"28 April 2017","externalUrl":null,"permalink":"/tags/vscode/","section":"Tags","summary":"","title":"Vscode","type":"tags"},{"content":"Admittedly, the Function that we created yesterday doesn\u0026rsquo;t do a whole lot more than respond to an incoming HTTP request with \u0026ldquo;Hello\u0026rdquo; and a name. 
Let\u0026rsquo;s add a little bit of complexity to our demo by creating a new function using Visual Studio.\nPrerequisites # Visual Studio 2015 (Any Edition) Azure Subscription Azure 2.9.6 .NET SDK Visual Studio Tools for Azure Functions Node + NPM Azure Functions CLI Download the \u0026ldquo;Hello World\u0026rdquo; Function # Navigate to your Azure Functions App in the portal. Click Platform Features → Advanced tools (Kudu) → Debug console → Powershell. Navigate to \u0026ldquo;site\u0026rdquo; and download the contents of wwwroot as a .zip file.\nAdding our Hello Function to a Solution # Create an empty solution in Visual Studio. You can either:\nCreate a native Azure Function project (simplest approach) — copy the wwwroot contents into it and start debugging with F5. Create an empty ASP.Net Web Application following Microsoft\u0026rsquo;s recommendation to use class libraries. Either way, you can debug locally using F5 and test with Postman using your local URL.\nConclusion # We\u0026rsquo;ve gone from hosting our Hello World Function on Azure App Service to being able to develop and debug locally.\nWhat\u0026rsquo;s next:\nDeveloping in VS Code and Powershell Deploy from source control Going beyond Hello World ","date":"26 April 2017","externalUrl":null,"permalink":"/posts/azure-functions-develop-debug-visual-studio/","section":"Posts","summary":"Setting up local development and debugging for Azure Functions using Visual Studio 2015.","title":"Azure Functions: Develop and Debug Using Visual Studio","type":"posts"},{"content":"","date":"26 April 2017","externalUrl":null,"permalink":"/tags/visual-studio/","section":"Tags","summary":"","title":"Visual-Studio","type":"tags"},{"content":"This past week I started playing around with Azure Functions. 
They offer an inexpensive way to get started with microservices and serverless architecture in Azure.\nEssentially, they allow the developer to write small, loosely coupled services using infrastructure that is completely maintained by the cloud service provider. There is no need to worry about scaling your Functions or OS patching as the cloud service will handle both of these concerns for you.\nGetting Started # The simplest way to get started doesn\u0026rsquo;t require an Azure subscription. Just head over to the Azure Functions page and click the \u0026ldquo;Try it for free\u0026rdquo; button. Then click the \u0026ldquo;Webhook + API\u0026rdquo; button and choose your language (for this demo I\u0026rsquo;ve chosen C#). Once the setup is done you have a fully functional Webhook.\nTesting Your Webhook # Prerequisite: download Postman.\nAt this point you can use the portal to send in HTTP requests or you can use Postman. To use the portal just click on the \u0026ldquo;Test\u0026rdquo; link on the right side of the code editor. Change the name parameter to your name and click Run. You should see \u0026ldquo;Hello %YourName%\u0026rdquo;.\nNext, let\u0026rsquo;s use Postman. Grab the URL from \u0026ldquo;Get Function URL\u0026rdquo; above the code editor. Open Postman, change the HTTP verb to \u0026ldquo;POST\u0026rdquo;, paste the URL, add a Content-Type: application/json header, and in the Body tab (raw) paste:\n{ \u0026#34;name\u0026#34;: \u0026#34;Keith\u0026#34; } Hit Send and you should see \u0026ldquo;Hello Keith\u0026rdquo;!\nUsing Your Azure Subscription # If you have an Azure subscription, create a Functions App through the Azure Portal. 
Create a new HttpTrigger Function and test it the same way.\nConclusion # While our newly created Webhook doesn\u0026rsquo;t do a whole lot we have shown how easy it is to get started with Azure Functions.\nWhat\u0026rsquo;s next: # Develop and debug using Visual Studio Develop using VS Code and Powershell Deploy from source control Going beyond Hello World ","date":"25 April 2017","externalUrl":null,"permalink":"/posts/azure-functions-getting-started/","section":"Posts","summary":"Getting started with Azure Functions — creating your first webhook with C#.","title":"Azure Functions: Getting Started/Hello World","type":"posts"},{"content":"","date":"25 April 2017","externalUrl":null,"permalink":"/tags/serverless/","section":"Tags","summary":"","title":"Serverless","type":"tags"},{"content":"","date":"22 April 2017","externalUrl":null,"permalink":"/tags/career/","section":"Tags","summary":"","title":"Career","type":"tags"},{"content":"","date":"22 April 2017","externalUrl":null,"permalink":"/tags/certification/","section":"Tags","summary":"","title":"Certification","type":"tags"},{"content":" Developing Microsoft Azure Solutions # If you want to skip the journey, and my pitfalls, please skip to the section What You Need to Know.\nAs a developer and senior consultant, part of my job is to stay up to date on new technologies. Early last year a couple colleagues and I set out to earn our Microsoft Certified Solutions Developer (MCSD) in Azure Solutions. There are three exams you need to pass in order to earn the certification: 70-532 (discussed here), 70-533 and 70-534 (which will be out of scope for this post).\nIt looks like Microsoft Learning has updated the track and now the exam is part of the MCSA track. Read more about becoming a Microsoft Certified Solutions Associate here.\nI set out on the journey to becoming certified with the idea that I could study, pass the three exams and then I would be set for the next decade as a developer. After all, Azure is the next C#, right? 
I couldn\u0026rsquo;t have been more wrong.\nStudying for this exam has been a rich and rewarding experience. Because it covers such a broad subject matter I am better prepared to handle scenarios that used to cause me fear. I have found that I really enjoy learning and sharing this knowledge with others. And lastly, I have seen how I develop software and the tools I use change and grow in ways that I could not have anticipated before I started on this journey.\nWhat is the 70-532 Exam # The 70-532 exam is an intentionally broad exam that covers a lot of the Azure platform. The objectives are still \u0026ldquo;technically\u0026rdquo; the same as they have been for a few years but the platform is constantly evolving as new technologies are added to the platform.\nThe exam covers four major areas:\nAzure Resource Manager Virtual Machines Design and Implement a Storage and Data Strategy Manage Identity, Application, and Network Services Design and Implement Azure PaaS Compute and Web and Mobile Services Where Things Faltered A Bit # When my colleagues and I started studying for the exam we picked up Zoiner Tejada\u0026rsquo;s book \u0026ldquo;Exam Ref 70-532 Developing Microsoft Azure Solutions.\u0026rdquo;\nNote: as an example of how quickly Azure is changing as a platform, this book is now out of date and was originally published in February of 2015. By the time you come across this post there may be more topics and technologies introduced to the platform.\nOur goal was to go through Mr. Tejada\u0026rsquo;s book one chapter per week and finish it in five weeks with the final week used to study. The book was definitely excellent but it covered a bit too much. Azure was undergoing a change in the platform as they were changing from the Classic Azure Portal to the new Azure Portal. So, examples had to be given for both the old portal and the new portal. 
Luckily, the modern exam only covers the new portal so you don\u0026rsquo;t need to worry about the two portals anymore.\nGetting Back on Track # Things got back on track last summer when a manager at my company decided to run an \u0026ldquo;Azure boot camp\u0026rdquo; to prepare folks for taking the 70-532. We all registered for the MeasureUp practice exam and did a combination of group and self-study. The group would go over one section of the exam each week and then we would go over any areas we had questions on our own.\nRecommendations for Self-Study # For this section you\u0026rsquo;re going to need a free account on edX.org and a Pluralsight account. Note: Pluralsight has a free one-month trial so you can do all of these for free. Lastly, you will need an Azure Subscription. If you don\u0026rsquo;t already have one, you can sign up for a free Azure trial subscription.\nStart with Tim Warner\u0026rsquo;s course Prepare to Pass the Azure Solutions (70-532) Exam. Highly recommended:\nNext check out Sidney Andrews\u0026rsquo; excellent course DEV233x Developing Microsoft Azure Solutions on edX.org. I recommend going through all five modules and doing every example along with him. John Papa and Shayne Boyer\u0026rsquo;s course Play by Play: Understanding API Functionality Through Swagger is also a must. Mark Heath\u0026rsquo;s course Azure Functions Fundamentals is highly recommended as functions are being pushed heavily on the platform. What You Need to Know # Let\u0026rsquo;s go through the need-to-know items section by section:\nAzure Resource Manager Virtual Machines: know how to create and deploy virtual machines using ARM Templates as well as the Azure CLI. Know how to prepare an on-premises machine and move it to the cloud. Know how to create a generalized image for building out a farm of machines. Design and Implement a Storage and Data Strategy: know what the different storage SKUs are for. 
Know fault and update domains in both ARM and classic models (they\u0026rsquo;re different). Know how to create a resource group and a disk using Powershell as well as Azure CLI. Know the different types of storage and what they are used for. Manage Identity, Application, and Network Services: Know how to set up basic Azure AD authentication scenarios. Know how to create a VNET and assign a machine to it. Know load balancing scenarios. Design and Implement Azure PaaS Compute and Web and Mobile Services: Know a bit about Azure Functions. Know how to use DocumentDB. Know how to make your Web App testable and have well-defined endpoints using Swagger. Know how to set up a Logic App. Finally\u0026hellip; Take the Exam # Once you\u0026rsquo;re done studying, be sure to pick a time and a place for your exam. They now allow you to take the exam online, so you can potentially take it from a home office.\nIn Conclusion # This should give anyone a pretty solid study guide for the 70-532 / Course 20532C Exam. If you don\u0026rsquo;t pass on the first try, please regard it as a learning experience and attempt the exam again. It took me two tries to pass the exam, and on the first attempt I missed by just a few wrong answers! The subject matter is vast so don\u0026rsquo;t get too down on yourself if you don\u0026rsquo;t pass the first time out!\nGood Luck! # ","date":"22 April 2017","externalUrl":null,"permalink":"/posts/getting-certified-passing-the-70-532/","section":"Posts","summary":"Study guide and tips for passing the Microsoft Azure 70-532 exam.","title":"Getting Certified: Passing the 70-532 / Course 20532C Exam","type":"posts"},{"content":"One of the biggest challenges I have been facing recently is deciding where I am going to store content. For work I have been using Microsoft OneNote fairly extensively. At home my wife and I were using it for a bit as well when we were trying to create to-do lists. 
But it became a bit cumbersome and we switched over to Google Keep (mostly for those lovely honey-do lists). Both applications work great for synchronizing lists and notes with our various devices. But what happens when you want to share things with the public?\nWhy Ghost? # I was looking for an easy-to-manage solution for blogging and thought: \u0026ldquo;oh, I\u0026rsquo;ll just go and set up WordPress using Azure\u0026rsquo;s PaaS (platform as a service) WordPress offering and let it run on Linux.\u0026rdquo; Sounded great. Until I realized that I don\u0026rsquo;t know PHP and administration would be a bit cumbersome. So, I went looking for a solution and stumbled upon Scott Hanselman\u0026rsquo;s update about deploying Ghost into Azure directly from Git. Awesome. Just a few clicks and the blog was up and running.\nSetup # Resources # Scott Hanselman: Deploy Ghost Using Deploy to Azure Before You Click \u0026ldquo;Deploy to Azure\u0026rdquo; # I am not going to reinvent the wheel here as the deployment piece is already done. If you want to simply get something up on the Web all you need is an Azure Subscription and the Deploy to Azure button will take care of everything for you.\nThat being said, because I like the additional control, there are just a couple extra considerations that I would like to bring up.\nFirst, in your Azure subscription you can host your blog for free by creating an App Service in the Free tier if you simply want a blog and don\u0026rsquo;t need a custom domain. This is great for folks that are just looking to get started or don\u0026rsquo;t need some of the extra features.\nNote: The first Web App tier that gives you SSL for a custom domain is the \u0026ldquo;Basic\u0026rdquo; tier.\nSecond, take a moment and decide how you want to manage your Ghost install and its content. Do you want it to be in source control (Git, Visual Studio Team Services or whatever repository you like)? 
This will give you a little extra control if you want to update the Ghost components or set up continuous integration/continuous delivery (CI/CD).\nThird, do you want to be able to control your infrastructure for your blog in a programmatic way? In this case you can use Powershell, ARM Templates or the Azure CLI to build out your Web App, Storage and other components for you on demand.\nCustom Domains # Note: The first tier that will allow you to have a custom domain is currently the \u0026ldquo;Shared\u0026rdquo; tier. If you\u0026rsquo;re hosting in the \u0026ldquo;Free\u0026rdquo; tier you will not have access to the features from here on out.\nFor this blog I wanted to go a few steps further and host it under a custom domain. First, I registered the custom domain. Second, I chose an appropriate pricing tier for my Web App in Azure. However, if you\u0026rsquo;ve already chosen a tier for your Web App, don\u0026rsquo;t worry, you can easily change the setting in the Azure Portal.\nSetting up the custom domain was easy. I wanted to host the blog at the address you\u0026rsquo;re viewing it at now. I just had to set up a couple \u0026ldquo;A\u0026rdquo; records and \u0026ldquo;TXT\u0026rdquo; records with my domain registrar and configure a couple settings in the Azure Portal under my Web App. It\u0026rsquo;s all pretty straightforward.\nMapping a Custom Domain Name\nSSL Encryption # Note: you will need to set up your custom domain before you get to this part. Otherwise you cannot set up SSL.\nThis was actually the tricky bit. Troy Hunt wrote a great article about using Let\u0026rsquo;s Encrypt as an App within your Azure subscription.\nResources:\nTroy Hunt: Everything you need to know about loading a free Let\u0026rsquo;s Encrypt certificate into an Azure website Azure Docs: Setting up Azure AD Setting Up Let\u0026rsquo;s Encrypt for Ghost It is a bit involved as there are a few steps that you will need to follow in sequence. 
Again, I\u0026rsquo;m not going to rewrite his post but here are a couple of gotchas that arose while I was adding the SSL certificate from Let\u0026rsquo;s Encrypt:\nWhen you create your app in Azure AD (called Let\u0026rsquo;s Encrypt) Troy\u0026rsquo;s post mentions a \u0026ldquo;clientid\u0026rdquo;. This ID has been renamed \u0026ldquo;Application ID\u0026rdquo; and is a GUID. You\u0026rsquo;ll want to take note of it and copy the GUID to Notepad.\nWhen you create the keys for your Let\u0026rsquo;s Encrypt app (I created two in case I need to rotate them), you will want to take note of these and copy them to Notepad as well.\nIn the Azure Portal when you are looking for your \u0026ldquo;Tenant Name\u0026rdquo; you can find this by hovering over your photo in the upper right hand corner of the portal. This is no longer displayed by clicking on your photo and you can\u0026rsquo;t copy/paste it. In fact, the tenant is listed as \u0026ldquo;Domain\u0026rdquo;. You will need to take note of this and copy it to Notepad.\nOnce you have added your keys and secrets to your Web App, you will need to open Kudu and update your Web.config settings under \u0026ldquo;site/wwwroot/Web.config\u0026rdquo;. See this article towards the bottom for the updated settings you need to paste into Web.config.\nConclusion # That pretty much wraps it up. If you have any questions, need any clarifications or just want to report back on your successes please leave a comment or send me a Tweet.\n","date":"21 April 2017","externalUrl":null,"permalink":"/posts/setting-up-your-own-blog-using-ghost/","section":"Posts","summary":"How to set up a Ghost blog on Azure with custom domains and SSL.","title":"Setting Up Your Own Blog Using Ghost","type":"posts"},{"content":" First Steps # You have to crawl before you can walk. 
And just like when I wrote for the paper at The Daily at the University of Washington, getting the first words down on paper (or in your text editor) is usually the hardest part. So here we are, shiny and new and with the greatest of intentions.\nWhat You\u0026rsquo;ll Find Here # My intent with this blog is to write about things I find interesting. Programming, namely with C# on Microsoft Azure, music, soccer, community. All the good stuff.\nWho are ya? # There\u0026rsquo;s a lot that makes up a person. There\u0026rsquo;s already plenty of content out there so let\u0026rsquo;s not reinvent the wheel.\nProfessional profile:\nLinkedIn GitHub Personal:\nEmerald City Supporters Connect:\nTwitter Keybase.io ","date":"20 April 2017","externalUrl":null,"permalink":"/posts/hello-world-first/","section":"Posts","summary":"First steps — getting the blog started.","title":"Hello World!","type":"posts"},{"content":"","date":"20 April 2017","externalUrl":null,"permalink":"/tags/introduction/","section":"Tags","summary":"","title":"Introduction","type":"tags"},{"content":"I\u0026rsquo;m Keith — a Solutions Architect at Amazon Web Services, dad, and occasional gamer based in the Pacific Northwest.\nBy day, I\u0026rsquo;m a Deloitte-dedicated technical resource covering Telco, Media \u0026amp; Entertainment, Gaming \u0026amp; Sports, and Life Sciences in the US. I also work across all verticals on Security, Builder Tools \u0026amp; Experience (Next-Generation Developer Experience), and Generative AI. I recommend solutions, review technical architectures, conduct pricing reviews, and help Deloitte practitioners build and scale on AWS.\nWhat I\u0026rsquo;m most excited about right now is agentic AI — and I\u0026rsquo;m not just advising on it, I\u0026rsquo;m building with it. 
I\u0026rsquo;ve built my own AI-powered productivity system using Amazon Bedrock, AgentCore, and MCP (Model Context Protocol) that handles workflows I used to do manually: cost modeling, technical analysis, session management, and content generation. It\u0026rsquo;s a proving ground for the same patterns I recommend to Deloitte and their customers. The best architects are the ones who build.\nBefore AWS, I spent over a decade in the Microsoft/.NET ecosystem as a software engineer and consultant — shipping production code, working inside client organizations, and understanding the consulting business model from the inside. I know what it\u0026rsquo;s like to be on the other side of the table.\nI also host technical education content on Twitch and LinkedIn Live in partnership with AWS Training \u0026amp; Certification. My series on Amazon Nova, Cloud Essentials, and AWS Certification prep have reached 82,000+ learners. I love translating complex technical concepts into something accessible — whether that\u0026rsquo;s a live stream, a white-paper, or a 30-minute architecture review.\nBeyond my core role, I mentor early-career and tenured architects, participate in a technical field community focused on next-generation developer tools, and have conducted 15+ candidate conversations to help people understand what it\u0026rsquo;s really like to work at AWS.\nThis blog started in 2017 as a way to document what I was learning about Azure Functions, algorithms, and cloud architecture. Life got busy (Theodore arrived!), but the itch to write never fully went away. 
When I\u0026rsquo;m not coding, you\u0026rsquo;ll find me on the soccer pitch or gaming with the Cascadian Gamers crew for Extra Life.\nIf you\u0026rsquo;re working on agentic AI, cloud migration, GenAI architecture, or solution development with Deloitte and AWS — let\u0026rsquo;s connect.\nFind me on Twitter/X or GitHub.\n","externalUrl":null,"permalink":"/about/","section":"Keith Hodo","summary":"","title":"About","type":"page"},{"content":"","externalUrl":null,"permalink":"/authors/","section":"Authors","summary":"","title":"Authors","type":"authors"},{"content":"","externalUrl":null,"permalink":"/categories/","section":"Categories","summary":"","title":"Categories","type":"categories"},{"content":"","externalUrl":null,"permalink":"/series/","section":"Series","summary":"","title":"Series","type":"series"}]