We at Lexsi Labs are pleased to share TabTune, an open-source framework for working with tabular foundation models (TFMs)!
TabTune was built to tame the complexity of modern TFMs by providing a unified TabularPipeline interface for data preprocessing, model adaptation and evaluation. With a single API, practitioners can seamlessly switch between zero‑shot inference, supervised fine‑tuning, meta‑learning fine‑tuning and parameter‑efficient tuning (LoRA), while leveraging automated handling of missing values, scaling and categorical encoding. Several use cases illustrate the flexibility of TabTune (a brief usage sketch follows the list):
- Rapid prototyping: Zero‑shot inference lets you obtain baseline predictions on new tabular datasets without any training, making quick proofs of concept straightforward.
- Fine‑tuning: Full fine‑tuning and memory‑efficient LoRA adapters let you tailor models such as TabPFN, Orion-MSP and Orion-BiX to your classification tasks, balancing performance against compute.
- Meta‑learning: TabTune includes meta‑learning routines for in‑context learning models, allowing fast adaptation to numerous small tasks or datasets.
- Responsible AI: Built‑in diagnostics assess calibration (expected calibration error, maximum calibration error, Brier score) and fairness (statistical parity, equalized odds) to help you evaluate trustworthiness beyond raw accuracy.
- Extensibility: The modular design makes it straightforward to integrate custom models or preprocessing components, so researchers and developers can experiment with new architectures.
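To make the single-API claim concrete, here is a minimal usage sketch. The class name TabularPipeline comes from the description above, but the import path, method and argument names (`finetune_type`, `evaluate`, the metric keys) are illustrative assumptions, not TabTune's documented API; please consult the repository for the actual signatures.

```python
# Illustrative sketch only -- names below are assumptions, not TabTune's
# documented API; check the TabTune repository for the real signatures.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

from tabtune import TabularPipeline  # assumed import path

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Zero-shot inference: preprocessing plus in-context prediction,
# with no gradient updates.
zero_shot = TabularPipeline(model_name="TabPFN", task="classification")
zero_shot.fit(X_train, y_train)
baseline_preds = zero_shot.predict(X_test)

# Parameter-efficient adaptation: same interface, different strategy flag.
lora = TabularPipeline(
    model_name="TabPFN",
    task="classification",
    finetune_type="lora",  # hypothetical flag selecting LoRA adapters
)
lora.fit(X_train, y_train)

# Diagnostics beyond accuracy: calibration and fairness in one report.
report = lora.evaluate(
    X_test,
    y_test,
    metrics=["ece", "mce", "brier", "statistical_parity", "equalized_odds"],
)
print(report)
```

The intent of the design described above is that moving from zero‑shot inference to LoRA fine‑tuning or meta‑learning is a one-argument change rather than a new code path.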
TabTune represents an exciting step toward standardizing workflows for TFMs. We invite interested professionals to explore the codebase, provide feedback and consider contributing. Your insights can help refine the toolkit and accelerate progress in this emerging area of structured data learning.