Azure SDK January 2026: Microsoft Foundry Agents Service, GPT-5 Integration, and Knowledge Base APIs

Microsoft’s January 2026 Azure SDK release represents the most significant AI-focused update in the SDK’s history. This release introduces the Microsoft Foundry Agents Service integration, support for GPT-5, GPT-5-mini, and GPT-5-nano models in Azure AI Search, and a complete rebranding of “Knowledge Agent” to “Knowledge Base” with expanded capabilities. In this comprehensive guide, we’ll explore these features with production-ready code patterns, architectural considerations, and integration strategies for enterprise .NET applications.

What’s New in the January 2026 Azure SDK

The release includes updates across multiple Azure AI packages:

| Package | Version | Key Updates |
| --- | --- | --- |
| Azure.AI.Projects | 1.0.0 | GA release with Foundry Agents Service |
| Azure.AI.Projects.OpenAI | 1.0.0 | New package for OpenAI integration |
| Azure.Search.Documents | 12.3.0 | GPT-5 models, facet aggregation, hybrid search |
| Azure.AI.Evaluation | 1.2.0 | Red teaming, insights, expanded metrics |
| Microsoft.Azure.WebJobs.Extensions.WebPubSubForSocketIO | 1.0.0 | Stable release for real-time Functions |

Microsoft Foundry Agents Service: Enterprise-Grade AI Agents

The Foundry Agents Service provides managed infrastructure for deploying, scaling, and observing AI agents in production. Unlike agents built from scratch with Semantic Kernel or AutoGen, the service handles the operational complexity (conversation memory, tool execution, observability, scaling) while you focus on agent logic.

Architecture Overview

graph TB
    subgraph YourCode ["Your Application"]
        App["ASP.NET Core API"]
        SDK["Azure.AI.Projects SDK"]
    end
    
    subgraph FoundryService ["Azure Foundry Agents Service"]
        Orchestrator["Agent Orchestrator"]
        Memory["Conversation Memory"]
        Tools["Tool Execution Runtime"]
        Observability["Telemetry & Tracing"]
    end
    
    subgraph AIModels ["Azure OpenAI"]
        GPT5["GPT-5"]
        GPT5Mini["GPT-5-mini"]
        Embeddings["text-embedding-3-large"]
    end
    
    subgraph KnowledgeSources ["Knowledge Sources"]
        Search["Azure AI Search"]
        SharePoint["SharePoint Online"]
        OneLake["OneLake"]
    end
    
    App --> SDK
    SDK --> Orchestrator
    Orchestrator --> Memory
    Orchestrator --> Tools
    Orchestrator --> Observability
    Orchestrator --> GPT5
    Orchestrator --> GPT5Mini
    Tools --> Search
    Tools --> SharePoint
    Tools --> OneLake
    
    style App fill:#E3F2FD,stroke:#1565C0
    style Orchestrator fill:#E8F5E9,stroke:#2E7D32
    style GPT5 fill:#FFF3E0,stroke:#EF6C00

Getting Started with Azure.AI.Projects

Install the new packages:

# Install the GA packages
dotnet add package Azure.AI.Projects --version 1.0.0
dotnet add package Azure.AI.Projects.OpenAI --version 1.0.0
dotnet add package Azure.Identity

Creating Your First Foundry Agent

using Azure.AI.Projects;
using Azure.AI.Projects.OpenAI;
using Azure.Identity;

// Initialize the Foundry client
var connectionString = Environment.GetEnvironmentVariable("AZURE_AI_PROJECT_CONNECTION_STRING");
var client = new AIProjectClient(connectionString, new DefaultAzureCredential());

// Create an agent with GPT-5 and tools
var agentDefinition = new AgentDefinition
{
    Name = "CustomerSupportAgent",
    Description = "Handles customer inquiries using company knowledge base",
    Model = "gpt-5",  // New GPT-5 model support!
    Instructions = """
        You are a helpful customer support agent for Contoso Electronics.
        Always be polite and professional.
        Use the knowledge base tool to find accurate product information.
        If you cannot find an answer, escalate to a human agent.
        """,
    Tools = new List<AgentTool>
    {
        new KnowledgeBaseTool
        {
            KnowledgeBaseId = "contoso-products-kb",
            Description = "Search product documentation and FAQs"
        },
        new CodeInterpreterTool(),  // For calculations and data analysis
        new FunctionTool
        {
            Name = "create_support_ticket",
            Description = "Create a support ticket for human follow-up",
            Parameters = BinaryData.FromObjectAsJson(new
            {
                type = "object",
                properties = new
                {
                    summary = new { type = "string" },
                    priority = new { type = "string", @enum = new[] { "low", "medium", "high" } },
                    customer_email = new { type = "string", format = "email" }
                },
                required = new[] { "summary", "priority" }
            })
        }
    }
};

var agent = await client.CreateAgentAsync(agentDefinition);
Console.WriteLine($"Agent created: {agent.Value.Id}");

Running Conversations with Memory

Foundry Agents automatically manage conversation history and context:

using System.Text.Json;  // JsonSerializer is used below to parse tool-call arguments

// Create a conversation thread
var thread = await client.CreateThreadAsync();

// Add a user message
await client.CreateMessageAsync(thread.Value.Id, new ThreadMessage
{
    Role = MessageRole.User,
    Content = "I bought a wireless headset last week but the Bluetooth keeps disconnecting. " +
              "Order number is ORD-12345. Can you help?"
});

// Run the agent on the thread
var run = await client.CreateRunAsync(thread.Value.Id, agent.Value.Id);

// Poll for completion (or use streaming)
while (run.Value.Status == RunStatus.Queued
    || run.Value.Status == RunStatus.InProgress
    || run.Value.Status == RunStatus.RequiresAction)
{
    // Handle tool calls first so a run that is already waiting on tool output is not missed
    if (run.Value.Status == RunStatus.RequiresAction)
    {
        var toolOutputs = new List<ToolOutput>();
        foreach (var toolCall in run.Value.RequiredAction.SubmitToolOutputs.ToolCalls)
        {
            if (toolCall.Function.Name == "create_support_ticket")
            {
                var args = JsonSerializer.Deserialize<CreateTicketArgs>(toolCall.Function.Arguments);
                var ticketId = await CreateSupportTicketAsync(args);
                toolOutputs.Add(new ToolOutput(toolCall.Id, $"Ticket created: {ticketId}"));
            }
        }
        run = await client.SubmitToolOutputsAsync(thread.Value.Id, run.Value.Id, toolOutputs);
        continue;
    }

    await Task.Delay(1000);
    run = await client.GetRunAsync(thread.Value.Id, run.Value.Id);
}

// Get the agent's response
var messages = await client.GetMessagesAsync(thread.Value.Id);
var lastAssistantMessage = messages.Value
    .Where(m => m.Role == MessageRole.Assistant)
    .OrderByDescending(m => m.CreatedAt)
    .First();

Console.WriteLine($"Agent response: {lastAssistantMessage.Content}");

GPT-5 Models in Azure AI Search

Azure AI Search now supports the latest GPT-5 family for semantic ranking and answer generation:

| Model | Use Case | Context Window | Cost Tier |
| --- | --- | --- | --- |
| gpt-5 | Complex reasoning, multi-step analysis | 256K tokens | Premium |
| gpt-5-mini | Fast responses, general queries | 128K tokens | Standard |
| gpt-5-nano | High-volume, low-latency scenarios | 32K tokens | Economy |
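
The tiering suggests a simple routing pattern: send short, high-volume traffic to gpt-5-nano and reserve gpt-5 for complex requests. A minimal sketch of such a router; the thresholds and complexity heuristic are illustrative assumptions, not part of the SDK:

// Illustrative heuristic router across the GPT-5 tiers listed above.
// The complexity signal (length plus question count) is an assumption; substitute your own.
static string SelectGpt5Model(string query, bool requiresMultiStepReasoning)
{
    if (requiresMultiStepReasoning || query.Length > 2_000)
        return "gpt-5";          // Premium: complex, multi-step analysis

    int questionCount = query.Count(c => c == '?');
    return questionCount > 1 || query.Length > 400
        ? "gpt-5-mini"           // Standard: general queries
        : "gpt-5-nano";          // Economy: short, high-volume requests
}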

Configuring GPT-5 for Semantic Search

using Azure.Search.Documents;
using Azure.Search.Documents.Indexes;
using Azure.Search.Documents.Models;

var searchClient = new SearchClient(
    new Uri("https://contoso-search.search.windows.net"),
    "products-index",
    new DefaultAzureCredential()
);

// Perform a semantic search with GPT-5-mini for answer generation
var options = new SearchOptions
{
    QueryType = SearchQueryType.Semantic,
    SemanticSearch = new SemanticSearchOptions
    {
        SemanticConfigurationName = "products-semantic-config",
        QueryCaption = new QueryCaption(QueryCaptionType.Extractive),
        QueryAnswer = new QueryAnswer(QueryAnswerType.Extractive)
        {
            Count = 3
        },
        // New in January 2026: Specify the model for answer generation
        SemanticQueryOptions = new SemanticQueryOptions
        {
            ModelName = "gpt-5-mini",  // Fast, cost-effective
            MaxTokens = 500
        }
    },
    Size = 10,
    Select = { "productId", "name", "description", "price", "category" }
};

var results = await searchClient.SearchAsync<Product>("wireless headphones with noise cancellation", options);

// Access semantic answers
foreach (var answer in results.Value.SemanticSearch.Answers)
{
    Console.WriteLine($"Answer: {answer.Text}");
    Console.WriteLine($"Confidence: {answer.Score}");
}
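
The release table above also calls out hybrid search and facet aggregation in Azure.Search.Documents 12.3.0. A sketch of a hybrid (keyword plus vector) query with a facet aggregation, reusing the searchClient from the previous example; the descriptionVector field name and a server-side vectorizer on the index are assumptions about the example index:

// Hybrid query: the keyword text and the vectorizable text query run together and are fused.
var hybridOptions = new SearchOptions
{
    Size = 10,
    Facets = { "category,count:10" },   // aggregate result counts per category
    VectorSearch = new VectorSearchOptions
    {
        Queries =
        {
            new VectorizableTextQuery("wireless headphones with noise cancellation")
            {
                KNearestNeighborsCount = 50,
                Fields = { "descriptionVector" }
            }
        }
    }
};

var hybridResults = await searchClient.SearchAsync<Product>(
    "wireless headphones with noise cancellation", hybridOptions);

// Facet aggregation: result counts per category alongside the ranked hits
foreach (var facetValue in hybridResults.Value.Facets["category"])
{
    Console.WriteLine($"{facetValue.Value}: {facetValue.Count}");
}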

Knowledge Base: The New Knowledge Agent

The “Knowledge Agent” has been rebranded as “Knowledge Base” with expanded data source support:

  • Azure AI Search: Vector and hybrid search indexes
  • SharePoint Online: Documents, lists, and sites
  • OneLake: Fabric lakehouse data
  • Web Search: Grounded internet search (preview)

Creating a Multi-Source Knowledge Base

var knowledgeBase = new KnowledgeBaseDefinition
{
    Name = "enterprise-knowledge",
    Description = "Unified enterprise knowledge from multiple sources",
    Sources = new List<KnowledgeSource>
    {
        // Azure AI Search source
        new AzureSearchKnowledgeSource
        {
            SearchServiceEndpoint = new Uri("https://contoso-search.search.windows.net"),
            IndexName = "corporate-docs",
            SemanticConfigurationName = "default",
            QueryType = KnowledgeQueryType.Hybrid,  // Vector + keyword
            TopK = 5
        },
        
        // SharePoint source (new!)
        new SharePointKnowledgeSource
        {
            SiteUrl = new Uri("https://contoso.sharepoint.com/sites/engineering"),
            DocumentLibraries = new[] { "Technical Specs", "Design Documents" },
            IncludeSubfolders = true,
            FileTypes = new[] { ".docx", ".pdf", ".pptx" }
        },
        
        // OneLake source (new!)
        new OneLakeKnowledgeSource
        {
            WorkspaceName = "analytics-workspace",
            LakehouseName = "gold-layer",
            TableNames = new[] { "customer_insights", "product_metrics" }
        }
    },
    ChunkingStrategy = new ChunkingStrategy
    {
        ChunkSize = 512,
        ChunkOverlap = 50
    },
    EmbeddingModel = "text-embedding-3-large"
};

var kb = await client.CreateKnowledgeBaseAsync(knowledgeBase);
Console.WriteLine($"Knowledge Base created: {kb.Value.Id}");

AI Evaluation and Red Teaming

The Azure.AI.Evaluation package now includes comprehensive testing tools for AI safety and quality:

using Azure.AI.Evaluation;

var evaluator = new AIEvaluator(connectionString, new DefaultAzureCredential());

// Evaluate agent responses for quality and safety
var evaluationRun = await evaluator.CreateEvaluationRunAsync(new EvaluationRunDefinition
{
    Name = "customer-support-eval-v2",
    AgentId = agent.Value.Id,
    TestDatasetId = "support-test-cases",
    Metrics = new List<EvaluationMetric>
    {
        EvaluationMetric.Groundedness,      // Is the response grounded in source data?
        EvaluationMetric.Relevance,         // Is the response relevant to the query?
        EvaluationMetric.Coherence,         // Is the response well-structured?
        EvaluationMetric.Fluency,           // Is the language natural?
        EvaluationMetric.Harmfulness,       // NEW: Safety evaluation
        EvaluationMetric.PromptInjection    // NEW: Security evaluation
    },
    RedTeamingConfig = new RedTeamingConfig
    {
        Enabled = true,
        AttackCategories = new[]
        {
            "jailbreak",
            "prompt_injection",
            "data_exfiltration",
            "harmful_content_generation"
        },
        NumAttemptsPerCategory = 50
    }
});

// Wait for evaluation to complete
var result = await evaluator.GetEvaluationResultAsync(evaluationRun.Value.Id);
Console.WriteLine($"Groundedness Score: {result.Value.Metrics["groundedness"].Score}");
Console.WriteLine($"Red Team Vulnerabilities Found: {result.Value.RedTeamingResults.VulnerabilitiesFound}");

💡 Best practice: Run red teaming evaluations on every agent update before promoting to production, and integrate them into your CI/CD pipeline using the az ai evaluation run CLI command.

Real-Time Updates with Web PubSub for Socket.IO

The stable release of the Web PubSub for Socket.IO extension for Azure Functions (Microsoft.Azure.WebJobs.Extensions.WebPubSubForSocketIO) enables streaming agent responses to connected clients in real time:

// Azure Function with Socket.IO for streaming agent responses
[Function("StreamAgentResponse")]
public async Task StreamAgentResponse(
    [SocketIOTrigger("hub", "chat")] SocketIOMessageRequest request,
    [SocketIOOutput("hub")] IAsyncCollector<SocketIOMessageOutput> output,
    FunctionContext context)
{
    var logger = context.GetLogger("StreamAgentResponse");
    var userMessage = request.Parameters[0].ToString();
    
    // Stream the agent response token by token
    await foreach (var chunk in client.StreamRunAsync(threadId, agentId, userMessage))
    {
        if (chunk.Delta?.Content != null)
        {
            await output.AddAsync(new SocketIOMessageOutput
            {
                SocketId = request.SocketId,
                EventName = "response_chunk",
                Parameters = new object[] { chunk.Delta.Content }
            });
        }
    }
    
    // Signal completion
    await output.AddAsync(new SocketIOMessageOutput
    {
        SocketId = request.SocketId,
        EventName = "response_complete"
    });
}

Migration from Semantic Kernel / AutoGen

If you’re currently using Semantic Kernel or AutoGen, here’s how Foundry Agents compares:

| Feature | Semantic Kernel | AutoGen | Foundry Agents |
| --- | --- | --- | --- |
| Memory Management | You implement | You implement | Managed service |
| Tool Execution | In-process | In-process | Managed runtime |
| Multi-Agent Orchestration | Plugin pattern | Group chat | Workflow builder |
| Observability | Custom telemetry | Custom logging | Built-in tracing |
| Scaling | You manage | You manage | Auto-scaling |
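
For tools, the migration is largely mechanical: a Semantic Kernel native function and the FunctionTool JSON schema from the agent definition earlier describe the same contract. A sketch of the correspondence; the Semantic Kernel side uses the real [KernelFunction] attribute, while execution moves from in-process invocation to the RequiresAction loop shown above:

using System.ComponentModel;
using Microsoft.SemanticKernel;

// Semantic Kernel: the tool is an in-process native function on a plugin class.
public sealed class TicketPlugin
{
    [KernelFunction("create_support_ticket")]
    [Description("Create a support ticket for human follow-up")]
    public Task<string> CreateSupportTicketAsync(
        [Description("Short summary of the issue")] string summary,
        [Description("low, medium, or high")] string priority)
        => Task.FromResult($"TCK-{Guid.NewGuid().ToString("N")[..8].ToUpperInvariant()}");
}

// Foundry Agents: the same contract becomes the declarative FunctionTool shown in the agent
// definition earlier, and your code handles invocation when a run reports RequiresAction.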

Key Takeaways

  • Azure.AI.Projects 1.0.0 brings managed AI agent infrastructure with the Foundry Agents Service.
  • GPT-5 model family (gpt-5, gpt-5-mini, gpt-5-nano) is now available in Azure AI Search for semantic ranking and answer generation.
  • Knowledge Base (formerly Knowledge Agent) supports SharePoint, OneLake, and Web sources alongside Azure AI Search.
  • Red teaming and evaluation tools enable systematic AI safety testing before production deployment.
  • Web PubSub for Socket.IO stable release enables real-time streaming of agent responses in serverless architectures.

Conclusion

The January 2026 Azure SDK release positions Azure as the leading enterprise AI platform. The combination of managed agent infrastructure, latest-generation models, and comprehensive evaluation tools addresses the full lifecycle of AI application development, from prototyping to production at scale. For .NET developers, the strongly typed SDK experience and seamless integration with existing Azure services make adopting these capabilities straightforward. Start with the Foundry Agents Service for new projects, and consider migrating existing Semantic Kernel or AutoGen implementations to benefit from the managed operational capabilities.
