Lumigator is an open-source platform developed by Mozilla.ai to help users select the most suitable language model for their specific needs. Currently, Lumigator supports the evaluation of summarization tasks using sequence-to-sequence models such as BART and T5, as well as causal models like GPT and Mistral. We plan to expand support to additional machine learning tasks and use cases in the future.
To learn more about Lumigator's features and capabilities, see the documentation, or get started with the example notebook for a platform API walkthrough.
Note
Lumigator is in the early stages of development. It is missing important features and documentation. You should expect breaking changes in the core interfaces and configuration structures as development continues.
As more organizations turn to AI for solutions, they face the challenge of selecting the best model from an ever-growing list of options. The AI landscape is evolving rapidly, with twice as many new models released in 2023 compared to the previous year. Yet, in spite of the wealth of metrics available, there’s still no standard way to compare these models.
The 2024 AI Index Report highlighted that AI evaluation tools aren't (yet) keeping up with the pace of development, making it harder for developers and businesses to make informed choices. Without a single, clear method for comparing models, many teams end up using suboptimal solutions or choosing models based on hype, which slows down product progress and innovation.
With Lumigator MVP, Mozilla.ai aims to make model selection transparent, efficient, and empowering. Lumigator provides a framework for comparing LLMs, using task-specific metrics to evaluate how well a model fits your project’s needs. With Lumigator, we want to ensure that you’re not just picking a model—you’re picking the right model for your use case.
The simplest way to set up Lumigator is to deploy it locally using Docker Compose. To this end, you need to have the following prerequisites installed on your machine:
- A working installation of Docker.
  - On a Mac, you need Docker Desktop 4.3 or later and docker-compose 1.28 or later.
  - On Linux, you need to follow the post-installation steps.
- The system Python; no version manager, such as pyenv, should be active.
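If you want to double-check your setup before continuing, the commands below print the relevant versions. This is only a convenience check; the exact output depends on your platform.

```bash
# Check the prerequisites listed above.
docker --version            # Docker Desktop 4.3 or later on macOS
docker compose version      # docker-compose 1.28 or later
python3 --version           # should be the system Python, with no pyenv shim active
```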
You can run and develop Lumigator locally using Docker Compose. This creates three container services networked together to make up all the components of the Lumigator application:
- `localstack`: Local storage for datasets that mimics S3-API-compatible functionality.
- `backend`: Lumigator's FastAPI REST API.
- `ray`: A Ray cluster for submitting several types of jobs.
Note
Lumigator requires an SQL database to hold metadata for datasets and jobs. The local deployment uses SQLite for this purpose.
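Once the stack is up (the startup steps follow below), you can confirm that all three services are running. This sketch assumes you run the command from the repository root, where the Compose configuration is expected to live:

```bash
# List the running Lumigator services; you should see localstack, backend, and ray.
docker compose ps
```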
To start Lumigator locally, follow these steps:
1. Clone the Lumigator repository:

   ```bash
   git clone git@github.com:mozilla-ai/lumigator.git
   ```

2. Navigate to the repository root directory:

   ```bash
   cd lumigator
   ```

3. Start Lumigator using Docker Compose:

   ```bash
   make start-lumigator
   ```
To verify that Lumigator is running, open a web browser and navigate to `http://localhost:8000`. You should get the following response:

```json
{"Hello": "Lumigator!🐊"}
```
Although this is a local setup, it lends itself to more distributed scenarios. For instance, you could provide different `AWS_*` environment variables to the backend container to connect to any provider's S3-compatible service instead of localstack. Similarly, you could provide a different `RAY_HEAD_NODE_HOST` to move compute to a remote Ray cluster, and so on. See here for an example of how to do this, and see the operational guides in the documentation for more deployment options.
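As an illustration, the snippet below sketches how such overrides might look. Only `RAY_HEAD_NODE_HOST` is named above; the `AWS_*` variable names and all values are assumptions based on common S3 client conventions, so check the operational guides for the exact configuration your deployment expects.

```bash
# Hypothetical overrides for a more distributed setup (all values are placeholders).
export AWS_ACCESS_KEY_ID="<access-key>"           # assumed: standard S3 credential variable
export AWS_SECRET_ACCESS_KEY="<secret-key>"       # assumed: standard S3 credential variable
export AWS_ENDPOINT_URL="https://s3.example.com"  # assumed: an external S3-compatible service
export RAY_HEAD_NODE_HOST="ray.internal.example"  # from the docs above: remote Ray cluster head node
make start-lumigator
```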
Now that Lumigator is running, you can start using it. The platform provides a REST API that allows you to interact with the system. Run the example notebook for a quick walkthrough.
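If you prefer to explore the API interactively first, FastAPI applications typically expose auto-generated OpenAPI docs; assuming Lumigator keeps the default path, they should be reachable while the stack is running:

```bash
# Open the interactive API docs in a browser (the /docs path assumes the FastAPI default).
open http://localhost:8000/docs   # on Linux, use xdg-open instead of open
```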
To stop the containers you started using Docker Compose, run the following command:

```bash
make stop-lumigator
```
For the complete Lumigator documentation, visit the docs page.
For contribution guidelines, see the CONTRIBUTING.md file.
To report a bug or request a feature, please open a GitHub issue. Be sure to check if someone else has already created an issue for the same topic.