MLflow 2.19.0

Daniel Lok

Dec 11, 2024, 7:56:42 AM
to mlflow...@googlegroups.com, eng-m...@databricks.com, ml-...@databricks.com
We are excited to announce the release of MLflow 2.19.0! This release includes a number of significant features, enhancements, and bug fixes.

Major New Features
  • ChatModel enhancements: ChatModel now adopts ChatCompletionRequest and ChatCompletionResponse as its new schema, and the predict_stream interface uses ChatCompletionChunk to deliver true streaming responses. Additionally, the custom_inputs and custom_outputs fields in ChatModel now use AnyType, enabling support for a wider variety of data types (see the sketch after this list). Note: in a future version of MLflow, ChatParams (and by extension, ChatCompletionRequest) will have the default values for n, temperature, and stream removed.
  • Tracing improvements: MLflow Tracing now supports both automatic and manual tracing for the DSPy, LlamaIndex, and LangChain flavors. Tracing is also enabled automatically during MLflow evaluation for all supported flavors.
  • New tracing integrations: MLflow Tracing now supports CrewAI and Anthropic, enabling a one-line, fully automated tracing experience (see the autolog sketch after this list).
  • AnyType in model signatures: MLflow now supports AnyType in model signatures, which can represent data types that were not previously supported.
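
As a quick illustration of the new ChatModel schema, here is a minimal sketch of a custom chat model that echoes the user's last message. The class name, the custom_outputs payload, and the exact constructor fields shown are illustrative and may differ slightly from the shipped dataclasses:

    import mlflow
    from mlflow.types.llm import ChatChoice, ChatCompletionResponse, ChatMessage

    class EchoChatModel(mlflow.pyfunc.ChatModel):
        def predict(self, context, messages, params):
            # Echo the last user message back using the new response schema.
            reply = ChatMessage(role="assistant", content=messages[-1].content)
            return ChatCompletionResponse(
                choices=[ChatChoice(index=0, message=reply)],
                # custom_outputs is now typed as AnyType, so arbitrary
                # JSON-serializable values can ride along with the response.
                custom_outputs={"echoed": True},
            )

    with mlflow.start_run():
        mlflow.pyfunc.log_model(artifact_path="chat_model", python_model=EchoChatModel())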
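
For the new tracing integrations, enabling tracing is a single autolog call. The sketch below assumes the anthropic SDK is installed and ANTHROPIC_API_KEY is set in the environment; the model name is only an example:

    import anthropic
    import mlflow

    # One line enables fully automated tracing for the Anthropic SDK.
    mlflow.anthropic.autolog()

    client = anthropic.Anthropic()
    client.messages.create(
        model="claude-3-5-sonnet-latest",  # example model name
        max_tokens=256,
        messages=[{"role": "user", "content": "Summarize MLflow Tracing in one sentence."}],
    )
    # The call above is captured as a trace and can be inspected in the MLflow UI.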
Other Features:
  • [Tracking] Add the update_current_trace API for adding tags to an active trace.
  • [Deployments] Update Databricks deployments to support AI Gateway and additional update endpoints
  • [Models] Support uv in mlflow.models.predict
  • [Models] Add type hint support, including Pydantic models
  • [Tracking] Add the trace.search_spans() method for searching spans within traces (see the sketch after this list)
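
Here is a short sketch of the two new tracing helpers above, assuming a function decorated with @mlflow.trace and that mlflow.get_last_active_trace() is available in your environment; the function name and tag values are illustrative:

    import mlflow

    @mlflow.trace
    def answer(question: str) -> str:
        # Attach tags to the trace that is currently active for this call.
        mlflow.update_current_trace(tags={"release": "2.19.0"})
        return question.upper()

    answer("what's new in mlflow tracing?")

    # Fetch the trace produced above and search its spans by name.
    trace = mlflow.get_last_active_trace()
    spans = trace.search_spans(name="answer")
    print([span.name for span in spans])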
Bug fixes:
  • [Tracking] Allow passing Spark Connect DataFrames to the mlflow.evaluate() API
  • [Tracking] Fix mlflow.end_run inside an MLflow run context manager
  • [Scoring] Fix the spark_udf conditional check for remote Spark Connect clients and Databricks Serverless
  • [Models] Allow changing max_workers for built-in LLM-as-a-Judge metrics
  • [Models] Support saving all LangChain runnables using code-based logging
  • [Model Registry] Return an empty array when DatabricksSDKModelsArtifactRepository.list_artifacts is called on a file
  • [Tracking] Stringify param values in client.log_batch()
  • [Tracking] Remove deprecated squared parameter
  • [Tracking] Fix request/response field in the search_traces output
Documentation updates:
  • [Docs] Add Ollama and Instructor examples in tracing doc
For a comprehensive list of changes, see the release change log, and check out the latest documentation on mlflow.org.