Databricks Introduces Agent Bricks: A new product that changes how enterprises develop domain-specific agents. It addresses the complexity of agent development by letting users focus on describing the agent's purpose and providing feedback.
- The automated workflow includes generating task-specific evals, LLM judges, and synthetic data, and searching over optimization techniques; a minimal LLM-judge sketch follows below.
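To make the LLM-judge idea concrete, here is a minimal, generic sketch of grading agent answers against task-specific criteria. It is not the Agent Bricks implementation: `call_llm`, the prompt template, and the scoring scale are placeholders you would adapt to your own stack.

```python
# Generic LLM-as-judge sketch; call_llm is a placeholder, not an Agent Bricks API.
import json

JUDGE_PROMPT = """You are grading an AI agent's answer.
Question: {question}
Answer: {answer}
Grading criteria: {criteria}
Respond with JSON: {{"score": <integer 0-5>, "rationale": "<one sentence>"}}"""

def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion call (model serving endpoint, API client, etc.)."""
    raise NotImplementedError("wire this to your model endpoint")

def judge(question: str, answer: str, criteria: str) -> dict:
    """Score one (question, answer) pair against task-specific criteria."""
    raw = call_llm(JUDGE_PROMPT.format(question=question, answer=answer, criteria=criteria))
    return json.loads(raw)

# A small synthetic eval set run through the judge yields a task-specific benchmark.
synthetic_evals = [
    {"question": "What is the refund window?", "answer": "30 days from purchase.",
     "criteria": "factually correct and cites the relevant policy"},
]
scores = [judge(**row) for row in synthetic_evals]
```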
Four-Step Automated Workflow: Starts with users declaring the task by selecting an objective and connecting data sources. Automatic evaluation then creates task-specific benchmarks.
- Proceeds to automatic optimization by searching over and combining optimization techniques. The final stage balances quality and cost; a hypothetical task-spec sketch of the full flow follows below.
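As an illustration of the declare, evaluate, optimize, balance flow, here is a hypothetical task specification. The field names and paths are invented for this sketch and do not reflect the actual Agent Bricks configuration schema.

```python
# Hypothetical task spec mirroring the four steps; field names are invented.
task_spec = {
    "objective": "information_extraction",                 # step 1: declare the task
    "data_sources": ["/Volumes/main/finance/contracts"],   # step 1: connect data sources
    "evaluation": {                                         # step 2: automatic evaluation
        "auto_generate_benchmarks": True,
        "llm_judges": ["correctness", "completeness"],
    },
    "optimization": {                                       # step 3: automatic optimization
        "search_space": ["prompt_rewrite", "fine_tuning", "model_selection"],
    },
    "targets": {                                            # step 4: balance quality and cost
        "min_quality_score": 0.85,
        "max_cost_per_1k_requests_usd": 2.0,
    },
}
```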
- Agent Learning from Human Feedback (ALHF): Addresses the challenge of steering agent behavior toward higher quality. The system receives rich contextual guidance from users and uses algorithms to translate that guidance into concrete optimizations.
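The sketch below shows only the simplest possible version of feedback-driven steering, assuming feedback arrives as free-text notes from domain experts that get folded into the agent's standing guidance; ALHF's actual algorithms are more sophisticated and are not reproduced here.

```python
# Minimal feedback-steering sketch; not the ALHF algorithm itself.
from dataclasses import dataclass, field

@dataclass
class AgentConfig:
    base_prompt: str
    guidance: list[str] = field(default_factory=list)  # accumulated expert feedback

def apply_feedback(config: AgentConfig, feedback: str) -> None:
    """Record one piece of natural-language feedback as standing guidance."""
    config.guidance.append(feedback)

def render_system_prompt(config: AgentConfig) -> str:
    """Build the effective system prompt from the base prompt plus all guidance."""
    if not config.guidance:
        return config.base_prompt
    notes = "".join(f"\n- {note}" for note in config.guidance)
    return f"{config.base_prompt}\nFollow these reviewer notes:{notes}"

config = AgentConfig(base_prompt="Answer questions using the HR policy corpus.")
apply_feedback(config, "Always cite the policy section number.")
print(render_system_prompt(config))
```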
- Test-time Adaptive Optimization (TAO): A new model tuning method that requires only unlabeled usage data. Improves both quality and cost by applying test-time compute and reinforcement learning.
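TAO's exact method is not detailed here; the sketch below shows only the general pattern it builds on: spend test-time compute by sampling several candidate responses to unlabeled prompts, score them, and keep the best as signal for a later reinforcement-learning or fine-tuning step. `generate` and `score` are placeholders.

```python
# Best-of-N over unlabeled prompts; generate/score are placeholders, and the
# downstream RL / fine-tuning step is omitted.
def generate(prompt: str, temperature: float = 0.8) -> str:
    raise NotImplementedError("call your model serving endpoint here")

def score(prompt: str, response: str) -> float:
    raise NotImplementedError("reward model or LLM judge goes here")

def best_of_n(prompt: str, n: int = 8) -> tuple[str, float]:
    """Sample n candidates and return the highest-scoring one with its score."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(((c, score(prompt, c)) for c in candidates), key=lambda pair: pair[1])

# Unlabeled usage data becomes (prompt, best_response, score) tuples for tuning.
unlabeled_prompts = ["Summarize this support ticket.", "Draft a contract-renewal email."]
tuning_examples = [(p, *best_of_n(p)) for p in unlabeled_prompts]
```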
- Mosaic AI Agent Evaluation: Helps developers evaluate the quality, cost, and latency of agentic AI applications. It identifies quality issues, logs metrics to MLflow Runs, and maintains consistency across environments.
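Agent Evaluation writes its results to MLflow automatically; the snippet below only illustrates the underlying pattern of recording quality, cost, and latency metrics on an MLflow Run, with made-up metric names and values.

```python
import mlflow

# One evaluation pass recorded as an MLflow Run (metric names/values are illustrative).
with mlflow.start_run(run_name="agent-eval-baseline"):
    mlflow.log_param("agent_version", "v3")
    mlflow.log_metric("answer_correctness", 0.87)      # quality
    mlflow.log_metric("cost_per_request_usd", 0.004)   # cost
    mlflow.log_metric("latency_p95_seconds", 2.3)      # latency
```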
Four Main Agent Types: Information Extraction Agent turns documents into structured fields (see the schema sketch after this list). Knowledge Assistant Agent provides answers from enterprise data.
- Multi-Agent Supervisor enables building systems of coordinated agents. Custom LLM Agent transforms text for specific tasks.
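To show what "documents into structured fields" means in practice, here is an illustrative target schema for an Information Extraction agent. The fields and the `extract` helper are hypothetical, not an Agent Bricks API.

```python
# Illustrative extraction schema; an LLM would fill these fields from raw text.
from dataclasses import dataclass

@dataclass
class InvoiceFields:
    vendor_name: str
    invoice_number: str
    total_amount: float
    currency: str
    due_date: str  # ISO 8601, e.g. "2025-07-31"

def extract(document_text: str) -> InvoiceFields:
    """Placeholder: in practice an extraction model fills the schema from the document."""
    raise NotImplementedError("call your extraction agent or model here")
```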
- Collaborative Development: CTO Matei Zaharia emphasizes that the product is a joint effort across teams, built on new tuning methods, and that it could change enterprise agent development workflows.
- Additional Resources: For more details, see Databricks' Data + AI Summit session and a video demonstration of the platform's capabilities.