
arize-phoenix
@arizephoenix
Open-Source AI Observability and Evaluation
ID: 1628913820046606336
https://github.com/Arize-ai/phoenix 24-02-2023 00:25:21
569 Tweets
1.1K Followers
298 Following



New visualizations to track your experiment evals and latency in arize-phoenix 👀📊 We've made it easy to clearly see how your experiments evolve over time. This has already saved me time I would've spent on manual digging. I can clearly see how performance shifts & more

According to Andrej Karpathy, we are in the Software 3.0 era, where prompts are code and LLMs are the operating systems. For this, you need an IDE suited to storing and testing this software, and you need access to a wide variety of operating systems. Amazon Web Services Bedrock gives



From Hamel Husain's 'Bootstrapping AI Products With Evals' talk: "Phoenix is my favorite open source tool"







As you tweak prompts and models to improve performance ... do you know what it's costing you? 💸 arize-phoenix Cost Tracking makes it clear where your LLM spend is going - so you can catch runaway costs before they get out of hand. 1. Track token usage across models and
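The idea behind cost tracking is simple arithmetic over per-model token counts. Here is a minimal standalone sketch of that calculation, assuming hypothetical per-token prices (`PRICES` and the helper functions are illustrative, not Phoenix's actual API - Phoenix computes this automatically from traced spans):

```python
# Minimal sketch: accumulate token usage per model, then price it.
# PRICES values are hypothetical USD rates per 1K tokens (prompt, completion).
from collections import defaultdict

PRICES = {
    "gpt-4o": (0.0025, 0.010),
    "gpt-4o-mini": (0.00015, 0.0006),
}

def add_usage(totals, model, prompt_tokens, completion_tokens):
    """Accumulate prompt/completion token counts for a model."""
    totals[model][0] += prompt_tokens
    totals[model][1] += completion_tokens

def total_cost(totals):
    """Sum the cost across all models from accumulated token counts."""
    cost = 0.0
    for model, (prompt_tokens, completion_tokens) in totals.items():
        in_price, out_price = PRICES[model]
        cost += prompt_tokens / 1000 * in_price
        cost += completion_tokens / 1000 * out_price
    return cost

totals = defaultdict(lambda: [0, 0])
add_usage(totals, "gpt-4o", 1200, 300)        # one expensive call
add_usage(totals, "gpt-4o-mini", 5000, 1000)  # many cheap calls
print(round(total_cost(totals), 6))           # → 0.00735
```

Breaking spend down per model like this is what lets you spot a runaway model or prompt before the bill arrives.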

🔧 arize-phoenix MCP gets a phoenix-support tool for Cursor / Anthropic Claude / Windsurf! You can now click the Add to Cursor button in Phoenix and get a continuously updating MCP server config integrated directly into your IDE. @arizeai/[email protected] also comes