Learn, Connect, Build
Ready to get started with AI and the latest technologies? Microsoft Reactor offers events, training, and community resources to help developers, entrepreneurs, and startups build with AI technology and more. Join us!
26 February 2026 | 6:30 PM - 7:30 PM Coordinated Universal Time (UTC)
Topic: Agents
Language: English
In the third session of our Python + Agents series, we’ll focus on two essential components of building reliable agents: observability and evaluation.
We’ll begin with observability, using OpenTelemetry to capture traces, metrics, and logs from agent actions. You'll learn how to instrument your agents and use a local Aspire dashboard to identify slowdowns and failures.
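If you want a taste of the instrumentation pattern before the session, here is a minimal sketch using the OpenTelemetry Python SDK. It assumes the opentelemetry-sdk and opentelemetry-exporter-otlp-proto-grpc packages are installed and an OTLP-capable collector, such as the standalone Aspire dashboard, is listening on localhost:4317; the tracer name and the run_agent_step function are illustrative placeholders, not code from the session.

```python
# A minimal sketch, assuming an OTLP collector (e.g. the standalone Aspire
# dashboard) is running locally on port 4317. run_agent_step is hypothetical.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Route every span through an OTLP exporter pointed at the local dashboard.
provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317"))
)
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("demo-agent")

def run_agent_step(query: str) -> str:
    # Wrapping each agent action in its own span makes slow or failing
    # steps visible individually in the dashboard's trace view.
    with tracer.start_as_current_span("agent.step") as span:
        span.set_attribute("agent.query", query)
        result = f"answered: {query}"  # stand-in for the real agent call
        span.set_attribute("agent.response.length", len(result))
        return result

run_agent_step("Summarize today's open issues")
```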
From there, we’ll explore how to evaluate agent behavior using the Azure AI Evaluation SDK. You’ll see how to define evaluation criteria, run automated assessments over a set of tasks, and analyze the results to measure accuracy, helpfulness, and task success.
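As a rough preview, the sketch below runs two of the SDK's built-in quality evaluators over a file of agent outputs. It assumes the azure-ai-evaluation package is installed, the AZURE_OPENAI_ENDPOINT and AZURE_OPENAI_API_KEY environment variables are set, and a hypothetical agent_outputs.jsonl file exists with "query" and "response" fields per line; the gpt-4o deployment name is a placeholder, and none of this is material from the session itself.

```python
# A minimal sketch of automated evaluation with the azure-ai-evaluation
# package. The dataset file and deployment name are placeholder assumptions.
import os
from azure.ai.evaluation import evaluate, CoherenceEvaluator, RelevanceEvaluator

model_config = {
    "azure_endpoint": os.environ["AZURE_OPENAI_ENDPOINT"],
    "api_key": os.environ["AZURE_OPENAI_API_KEY"],
    "azure_deployment": "gpt-4o",  # placeholder deployment name
}

# Run two built-in quality evaluators over every task in the dataset;
# each JSONL row is assumed to have "query" and "response" fields.
result = evaluate(
    data="agent_outputs.jsonl",
    evaluators={
        "relevance": RelevanceEvaluator(model_config),
        "coherence": CoherenceEvaluator(model_config),
    },
)

# Aggregated scores across all rows, e.g. mean relevance and coherence.
print(result["metrics"])
```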
By the end of the session, you’ll have practical tools and workflows for monitoring, measuring, and improving your agents—so they’re not just functional, but dependable and verifiably effective.
To follow along with the live examples, sign up for a free GitHub account. If you are brand new to generative AI with Python, start with our 9-part Python + AI series, which covers LLMs, embedding models, RAG, tool calling, MCP, and more.
Speakers
Pamela Fox
Microsoft
This event is part of the Python + Agents: Building AI agents and workflows with Agent Framework series.
Click here to visit the series page, where you can view all upcoming and on-demand events in the series.