Changelog
Latest release updates from the Langfuse team. Check out our Roadmap to see what's next.
Improved In-Product Onboarding
New onboarding screens introduce Langfuse features that are not yet in use within a project.
February 26, 2025
Claude 3.7 Sonnet support
Langfuse says hello to Anthropic Claude 3.7 Sonnet with day-1 support for the LLM playground, LLM-as-a-judge, and cost tracking.
February 25, 2025
Google AI Studio support
Langfuse now supports access to Gemini models via Google AI Studio.
February 25, 2025
Execute LLM-as-a-judge evaluations on existing data
You can now choose to execute LLM-as-a-judge evaluations on existing, new, or all data.
February 20, 2025
Model Context Protocol Prompt Server
Use Langfuse prompts in your LLM agent systems, Claude Desktop, Cursor, and other MCP clients via the new MCP server.
February 16, 2025
OpenTelemetry Tracing Support
Push your OpenTelemetry Spans to Langfuse
February 14, 2025
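As a sketch of what this looks like in practice: Langfuse's OTLP endpoint authenticates with HTTP Basic auth derived from a project's API key pair (the endpoint path and placeholder keys below are assumptions for illustration):

```python
import base64

# Hypothetical placeholder keys -- substitute your own project API keys.
public_key = "pk-lf-example"
secret_key = "sk-lf-example"

# Basic auth token built from the public/secret key pair.
token = base64.b64encode(f"{public_key}:{secret_key}".encode()).decode()
headers = {"Authorization": f"Basic {token}"}

# With the OpenTelemetry SDK, an exporter would then be configured
# roughly like this (sketch only, endpoint path assumed, not executed):
#   OTLPSpanExporter(
#       endpoint="https://cloud.langfuse.com/api/public/otel/v1/traces",
#       headers=headers,
#   )
print(headers["Authorization"])
```

Any OpenTelemetry-instrumented application can then push spans without a Langfuse SDK in the loop.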
Open Source LLMOps Stack
Together with LiteLLM, we are launching the OSS LLMOps Stack: a well-integrated, battle-tested stack that can easily be self-hosted and extended.
February 14, 2025
Graph view for LangGraph traces
Follow your LangGraph agent execution on a graph view
February 14, 2025
Select all items in tables with a single click
Quickly select all items matching your current filters and perform bulk actions on them.
February 11, 2025
Trace UI filtered view by log-level
Focus on the most relevant elements of your trace by using log-level filters
February 10, 2025
Gemini 2.0 support for playground and cost tracking
Langfuse says hello to Gemini 2.0 Flash, Gemini 2.0 Flash-Lite, and Gemini 2.0 Pro. Support added for both the LLM playground and cost tracking.
February 6, 2025
Use JsonPath to select from Input, Output, or Metadata values in LLM as a Judge
Use JsonPath to select from Input, Output, or Metadata in LLM as a Judge for more precise evaluation prompts.
February 6, 2025
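To illustrate what such a selector does, here is a minimal sketch: a tiny dotted-path lookup that mimics what a JsonPath expression selects from a trace's input or output. Langfuse evaluates real JsonPath expressions; this `select` helper is an assumption made purely for illustration.

```python
# Illustrative only: mimics what a JsonPath expression such as
# "$.input.question" would select from a trace object.
def select(data, path):
    node = data
    for key in path.lstrip("$.").split("."):
        node = node[key]
    return node

trace = {
    "input": {"question": "What is Langfuse?", "history": []},
    "output": {"answer": "An open-source LLM engineering platform."},
}

# Feed only the relevant field into the evaluation prompt:
print(select(trace, "$.input.question"))  # -> What is Langfuse?
```

Selecting a single field this way keeps evaluation prompts focused instead of pasting the entire trace payload into the judge.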
Typed public API client in SDKs
Access all of Langfuse public API resources via a typed client in our SDKs
February 5, 2025
OpenAI o3-mini support for playground and cost tracking
Two hours ago, OpenAI released the latest version of their o3-mini model. Langfuse now supports this model in both the LLM playground and cost tracking.
January 31, 2025
Cmd+K Menu
Quickly access different Langfuse features and switch between projects via the Cmd+K menu.
January 30, 2025
Project-level Data Retention Policies
Automatically remove event data outside of a specified retention period.
January 30, 2025
New API Reference
Langfuse now has a new API Reference with interactive examples.
January 30, 2025
New Prompt Editor
Complex prompts are easier to edit with our new prompt editor, which supports variable highlighting.
January 30, 2025
JS/TS SDK supports trace sampling
Trace sampling is now supported in the JS SDK and integrations.
January 30, 2025
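Conceptually, trace sampling has to keep or drop all events of a trace together, which is why it is commonly implemented by hashing the trace ID into a bucket. The sketch below illustrates that idea in Python; the hash-based scheme is an assumption for illustration, not the SDK's exact implementation.

```python
import hashlib

# Conceptual sketch (assumed scheme): hash the trace ID into [0, 1)
# and compare against the configured sample rate, so every event of a
# given trace gets the same keep/drop decision.
def is_sampled(trace_id: str, sample_rate: float) -> bool:
    digest = hashlib.sha256(trace_id.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64
    return bucket < sample_rate

print(is_sampled("trace-123", 1.0))  # a rate of 1.0 keeps every trace
print(is_sampled("trace-123", 0.0))  # a rate of 0.0 drops every trace
```

Because the decision is a pure function of the trace ID, retries and late-arriving spans of the same trace stay consistent with each other.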
Commit Messages on Prompt Versions
Document your updates to prompts
January 28, 2025
Dataset CSV upload
Create your Datasets in seconds
January 27, 2025
Configure additional HTTP headers for LLM API keys
Include custom HTTP headers in requests to LLM provider endpoints, which is particularly useful for OpenAI-compatible LLM proxies.
January 23, 2025
Track changes between prompt versions
See a detailed comparison of changes between prompt versions. Optionally, you can also review changes before creating a new prompt version.
January 22, 2025
Audit Logs
As part of our ongoing effort to improve Langfuse for larger teams, we are excited to announce that audit logs are now available on Enterprise/Teams plans.
January 21, 2025
PostHog integration is now GA
After running for some time in public beta, we are excited to announce that the PostHog integration is now fully available. It runs on an hourly basis and is available on all Langfuse plans (cloud) and Pro/Enterprise (self-hosted).
December 21, 2024
Improved cost tracking
Langfuse now supports cost tracking for all usage types, such as cached tokens, audio tokens, and reasoning tokens.
December 20, 2024
Langfuse v3 stable release
Langfuse v3 is now stable and ready for production use when self-hosting Langfuse, including many scalability and architectural improvements.
December 9, 2024
Extensive example notebook for JS/TS SDK
To make it easier to get started with the Langfuse JS/TS SDK, we've created an extensive end-to-end example notebook. It includes a general introduction and examples for Anthropic, OpenAI SDK and LangChain.
December 4, 2024
SSO with GitHub Enterprise and Keycloak
Langfuse now supports GitHub Enterprise and Keycloak as SSO providers on Langfuse Cloud and self-hosted instances
December 3, 2024
New documentation for Google Vertex AI and Gemini tracing
Comprehensive guides for tracing Google Vertex AI and Gemini models with Langfuse
December 2, 2024
Google Vertex AI support for LLM Playground and Evaluations incl. Gemini models
Langfuse now supports Google Vertex AI, including Gemini models, for the LLM Playground and Evaluations.
November 28, 2024
Launch Week 2 🚀
Prompt Experiments on Datasets with LLM-as-a-Judge Evaluations
Move fast on prompts without breaking things! Run experiments on Datasets and directly compare evaluation results side-by-side. Experimentation speeds up the feedback loop when working on prompts and prevents regressions when making rapid changes.
November 22, 2024
Launch Week 2 🚀
All new Datasets, Experimentation and Evaluation documentation
We've completely rebuilt the documentation for Datasets and Evals to make it easier to get started with offline evaluation. To celebrate Launch Week, we've also summarized all the documentation improvements we've made over the past year.
November 21, 2024
Launch Week 2 🚀
Full multi-modal support, including audio, images, and attachments
Add multi-modal attachments to LLM traces in Langfuse and view them within the Langfuse UI.
November 20, 2024
Launch Week 2 🚀
LLM-as-a-judge Evaluators for Dataset Experiments
Introducing support for managed LLM-as-a-judge evaluators for dataset experiments.
November 19, 2024
Launch Week 2 🚀
Dataset Run Comparison View
After running experiments on datasets, you can now compare results side-by-side, view metrics, and peek into details of each dataset item across runs.
November 18, 2024
Launch Week 2 🚀
llms.txt
Easily use the Langfuse documentation in Cursor and other LLM editors via the new llms.txt file.
November 17, 2024
Launch Week 2 🚀
Prompt Management for Vercel AI SDK
Langfuse Prompt Management now integrates natively with the Vercel AI SDK. Version and release prompts in Langfuse, use them via the Vercel AI SDK, and monitor metrics in Langfuse.
November 17, 2024
New Sidebar
The sidebar in the Langfuse UI is now collapsible. This change provides more screen space for viewing traces and other complex pages.
November 1, 2024
Event input and output masking
Configure SDK-side masking to redact sensitive information from inputs and outputs sent to the Langfuse server.
October 25, 2024
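A mask is simply a function applied to event inputs and outputs before data leaves your process. Below is a minimal sketch of such a function that redacts email addresses; the redaction logic, and the `mask=` parameter name shown in the comment, are assumptions for illustration.

```python
import re

# Sketch of an SDK-side mask: applied recursively to event input/output
# before anything is sent to the Langfuse server. Redacts email
# addresses here; real masking logic is up to you.
def mask(data):
    if isinstance(data, str):
        return re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[REDACTED EMAIL]", data)
    if isinstance(data, dict):
        return {k: mask(v) for k, v in data.items()}
    if isinstance(data, list):
        return [mask(v) for v in data]
    return data

# In the Python SDK the hook would be registered roughly as
# `Langfuse(mask=mask)` (parameter name assumed; check the docs).
print(mask({"prompt": "Contact alice@example.com"}))
```

Because masking runs client-side, sensitive values never reach the Langfuse server at all, which is the key difference from server-side redaction.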
Amazon Bedrock support for LLM Playground and Evaluations
Langfuse now supports Amazon Bedrock for LLM Playground and Evaluations.
October 11, 2024
Langfuse LLM-as-a-judge now supports any (tool-calling) LLM
Tool calling makes Langfuse Evals more reliable. Previously, only OpenAI models were supported. With this update, you can use any tool-calling LLM when setting up an LLM-as-a-judge evaluator.
October 11, 2024
Annotation Queues
Manage your annotation tasks with ease using our new workflow tooling. Create queues, add traces to them, and get a simple UI to review and label LLM application traces in Langfuse.
October 10, 2024
Aggregated and Color-coded Latency and Costs on Traces
Large traces can be hard to read. We've added aggregated latency and cost information at every span level to make it easier to spot outliers and debug the LLM application.
October 10, 2024
Documentation now integrates with GitHub Discussions (Support and Feature Requests)
See previews of all commonly asked questions and feature requests for each page across the Langfuse documentation.
September 23, 2024
Langfuse on AWS Marketplace
Langfuse is now available on AWS Marketplace, making it easier for AWS customers to procure a powerful LLM observability, analytics, and evaluation platform.
September 20, 2024
DSPy Integration Example
Langfuse now provides an integration example for DSPy, offering observability for this powerful framework that optimizes language model prompts and weights.
September 20, 2024
Link prompts to Langchain executions
Prompt management just got more powerful for Langchain users by linking Langfuse prompts to Langchain executions.
September 17, 2024
Custom Base Path for Self-Hosted Deployments
Deploy Langfuse behind a custom base path for more flexible self-hosting configurations.
September 13, 2024
Token/cost tracking for OpenAI's o1 models
Day 1 support for OpenAI's o1 models including tracking token counts and USD spend.
September 13, 2024