Building Production AI Applications with .NET 8 and C# 12

When .NET 8 and C# 12 were released, I was skeptical. After 15 years building enterprise applications, I’d seen framework updates come and go. But this release changed everything for AI development. Let me show you how to build production AI applications with .NET 8 and C# 12—using actual C# code, not Python wrappers. Figure […]

Read more →

LLM Output Formatting: JSON Mode, Pydantic Parsing, and Template-Based Outputs

Introduction: LLM outputs are inherently unstructured text, but applications need structured data—JSON objects, typed responses, specific formats. Getting reliable structured output requires careful prompt engineering, output parsing, validation, and error recovery. This guide covers practical output formatting techniques: JSON mode and structured outputs, Pydantic-based parsing, format enforcement with retries, template-based formatting, and strategies for handling […]

Read more →
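
As a quick illustration of the Pydantic-based parsing and retry strategy this post covers, here is a minimal sketch. The LLM call (`call_llm`) is a placeholder for whatever client you use, and `TicketSummary` is an invented example model; only the parse-validate-retry pattern is the point.

```python
# Minimal sketch: parse and validate LLM output with Pydantic, retrying on failure.
# `call_llm` is a hypothetical placeholder, not a specific vendor API.
import json
from pydantic import BaseModel, ValidationError


class TicketSummary(BaseModel):
    title: str
    priority: str          # e.g. "low" | "medium" | "high"
    tags: list[str]


def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (OpenAI, Anthropic, local model, ...)."""
    raise NotImplementedError


def get_structured_summary(ticket_text: str, max_retries: int = 3) -> TicketSummary:
    # Embed the JSON schema in the prompt so the model knows the target shape
    schema = json.dumps(TicketSummary.model_json_schema(), indent=2)
    prompt = (
        "Summarize the support ticket below as JSON matching this schema:\n"
        f"{schema}\n\nTicket:\n{ticket_text}\n\nReturn only JSON."
    )
    last_error = None
    for _ in range(max_retries):
        raw = call_llm(prompt)
        try:
            # model_validate_json parses and validates in one step (Pydantic v2)
            return TicketSummary.model_validate_json(raw)
        except ValidationError as err:
            last_error = err
            # Feed the validation error back so the model can correct itself
            prompt += f"\n\nYour previous output failed validation:\n{err}\nTry again."
    raise ValueError(f"No valid output after {max_retries} attempts: {last_error}")
```

Feeding the validation error back into the prompt is one common recovery strategy; the full post also discusses JSON mode and template-based alternatives.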

Building LLM Agents with Tools: From Simple Loops to Production Systems

Introduction: LLM agents extend language models beyond text generation into autonomous action. By connecting LLMs to tools—web search, code execution, APIs, databases—agents can gather information, perform calculations, and interact with external systems. This guide covers building tool-using agents from scratch: defining tools with schemas, implementing the reasoning loop, handling tool execution, managing conversation state, and […]

Read more →
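
For a flavor of the reasoning loop described above, here is a minimal sketch of a tool-using agent. The tool set, the JSON action format, and `call_llm` are illustrative assumptions, not the post's actual implementation.

```python
# Minimal sketch: a tool-calling loop where the model replies with JSON such as
# {"tool": "search", "args": {...}} or {"final_answer": "..."}.
# `call_llm` is a hypothetical placeholder for a chat-completion call.
import json

# Stub tools; real agents would wrap web search, code execution, APIs, databases
TOOLS = {
    "search": lambda query: f"(stub) top results for {query!r}",
    "calculator": lambda expression: str(eval(expression, {"__builtins__": {}})),
}

TOOL_DESCRIPTIONS = """\
search(query: str)           -> web search results
calculator(expression: str)  -> evaluate an arithmetic expression
"""


def call_llm(messages: list[dict]) -> str:
    """Placeholder for a real chat-completion call."""
    raise NotImplementedError


def run_agent(question: str, max_steps: int = 5) -> str:
    # Conversation state: system instructions, user question, then tool results
    messages = [
        {"role": "system", "content": (
            "You can use these tools:\n" + TOOL_DESCRIPTIONS +
            'Reply with JSON: {"tool": name, "args": {...}} to call a tool, '
            'or {"final_answer": "..."} when done.'
        )},
        {"role": "user", "content": question},
    ]
    for _ in range(max_steps):
        reply = call_llm(messages)
        messages.append({"role": "assistant", "content": reply})
        decision = json.loads(reply)
        if "final_answer" in decision:
            return decision["final_answer"]
        result = TOOLS[decision["tool"]](**decision["args"])
        # Feed the tool result back so the model can reason over it next turn
        messages.append({"role": "user", "content": f"Tool result: {result}"})
    return "Agent stopped: step limit reached."
```

Production systems add schema validation of the model's JSON, error handling around tool execution, and persistence of conversation state, which is where the full post goes.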

Building Chatbots with Personality: Using AI to Enhance User Experience

Over the past two decades of building enterprise software systems, I’ve watched conversational AI evolve from simple rule-based decision trees to sophisticated agents capable of nuanced, context-aware dialogue. Having architected chatbot solutions for financial services, healthcare, and e-commerce platforms, I’ve learned that the difference between a chatbot users tolerate and one they genuinely enjoy interacting […]

Read more →

Prompt Template Management: Engineering Discipline for LLM Prompts

Introduction: Prompts are the interface between your application and LLMs. As applications grow, managing prompts becomes challenging—they’re scattered across code, hard to version, and difficult to test. A prompt template system brings order to this chaos. It separates prompt logic from application code, enables versioning and A/B testing, and makes prompts reusable across different contexts. […]

Read more →
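
To make the idea concrete, here is a minimal sketch of a versioned prompt-template registry along these lines. The class and registry names are illustrative; a real system would typically load templates from files or a database rather than define them inline.

```python
# Minimal sketch: prompt templates kept out of application code, keyed by
# name and version so they can be diffed, reviewed, and A/B tested.
from dataclasses import dataclass
from string import Template


@dataclass(frozen=True)
class PromptTemplate:
    name: str
    version: str
    text: Template

    def render(self, **variables: str) -> str:
        # substitute() raises KeyError on missing variables, surfacing
        # template/code drift early instead of sending a broken prompt
        return self.text.substitute(**variables)


REGISTRY = {
    ("summarize", "v1"): PromptTemplate(
        "summarize", "v1",
        Template("Summarize the following text in $style style:\n\n$text"),
    ),
    ("summarize", "v2"): PromptTemplate(
        "summarize", "v2",
        Template("You are an expert editor. Produce a $style summary of:\n\n$text"),
    ),
}


def get_prompt(name: str, version: str = "v2") -> PromptTemplate:
    return REGISTRY[(name, version)]


if __name__ == "__main__":
    prompt = get_prompt("summarize", "v1").render(style="concise", text="...")
    print(prompt)
```

Keeping the version in the lookup key makes it straightforward to route a percentage of traffic to "v2" for A/B testing while "v1" stays the default elsewhere.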