Spotz is a hyperparameter optimization framework written in Scala that exploits Apache Spark for distributed computation. A broad set of optimization algorithms has been implemented to solve for the hyperparameter values of an objective function that you specify.
The eHarmony modeling team primarily uses Spark and Scala as the base of its machine learning pipeline. Given that Spark is our distributed compute engine of choice, we need a robust hyperparameter optimization framework that integrates well with Spark. There are already excellent frameworks like Hyperopt and Optunity written in Python, but an ideal framework that runs in Scala on top of Spark does not exist. MLlib, though providing some support for Grid Search, is not a general framework for hyperparameter tuning and does not integrate with other learners. This project's purpose is to build a simple framework that developers can integrate with Spark to fulfill their hyperparameter optimization needs.
At eHarmony, we make heavy use of Vowpal Wabbit. We use this learner so much that we feel strong integration with VW is very important. Considering that Vowpal Wabbit does not support hyperparameter optimization out of the box, we've taken steps to support it without losing generality.
Currently the following solvers have been implemented:

- Random Search
- Grid Search

We are currently exploring other search algorithms to add.
To use this as part of a Maven build, add the following dependency:

```xml
<dependency>
  <groupId>com.eharmony</groupId>
  <artifactId>spotz-core</artifactId>
  <version>1.0.0</version>
</dependency>
```
Using this framework consists of writing the following boilerplate code:

- Import the default definitions from the spotz `Preamble` object. Importing from a library's `Preamble` is a Scala convention to bring default definitions into the current scope.
- Define the objective function.
- Define the space of hyperparameter values that you wish to search.
- Select the solver.
Import the default definitions from the spotz `Preamble` object:

```scala
import com.eharmony.spotz.Preamble._
```
Define your objective function by implementing the `Objective[P, L]` trait:

```scala
import com.eharmony.spotz.objective.Objective

trait Objective[P, L] {
  def apply(point: P): L
}
```
Note that the objective function trait is type parameterized as `[P, L]` for the point and the loss. Your objective must simply implement the `apply(point: P): L` method of that trait. The point type parameter is an abstract representation of the current hyperparameter values and is passed into the objective through the apply method. The loss is the value returned from evaluating the objective function at that point. The framework's default implementation provides a `Point` class for the `P` type parameter within the `Preamble` object and uses `Double` as the loss type. Again, importing the default definitions from the `Preamble` object is important for this to work.
The Branin-Hoo function is shown here as a simple example.

```scala
import com.eharmony.spotz.Preamble._
import com.eharmony.spotz.objective.Objective

import scala.math.{Pi, cos, pow}

class BraninObjective extends Objective[Point, Double] {
  val a = 1
  val b = 5.1 / (4 * pow(Pi, 2))
  val c = 5 / Pi
  val r = 6
  val s = 10
  val t = 1 / (8 * Pi)

  /**
   * Input Domain:
   * This function is usually evaluated on
   * x1 ∈ [-5, 10], x2 ∈ [0, 15].
   *
   * Global Minimum:
   * f(x*) = 0.397887 at x* = (-Pi, 12.275), (Pi, 2.275), (9.42478, 2.475)
   *
   * @param point the current hyperparameter values
   * @return a Double which is the result of evaluating the Branin function
   */
  override def apply(point: Point): Double = {
    val x1 = point.get[Double]("x1")
    val x2 = point.get[Double]("x2")

    a * pow(x2 - b*pow(x1, 2) + c*x1 - r, 2) + s*(1-t)*cos(x1) + s
  }
}
```
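As a quick, framework-independent sanity check of the formula above, the same expression can be evaluated directly at one of the known minima. `BraninCheck` here is a throwaway object for illustration, not part of Spotz:

```scala
import scala.math.{Pi, cos, pow}

// Standalone check of the Branin-Hoo formula, independent of Spotz.
object BraninCheck {
  val a = 1.0
  val b = 5.1 / (4 * pow(Pi, 2))
  val c = 5 / Pi
  val r = 6.0
  val s = 10.0
  val t = 1 / (8 * Pi)

  def branin(x1: Double, x2: Double): Double =
    a * pow(x2 - b * pow(x1, 2) + c * x1 - r, 2) + s * (1 - t) * cos(x1) + s

  def main(args: Array[String]): Unit = {
    // At the known global minimum (Pi, 2.275) the value is ~0.397887
    println(branin(Pi, 2.275))
  }
}
```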
Define the space of hyperparameter values that you desire to search. This space is defined differently depending on the chosen optimizer.
For random search, the space is defined by a Map where the key is a string label and the value is an implementation of the `RandomSampler` trait. There are several defined classes that implement the `RandomSampler` trait. For a complete list of available `RandomSampler` functions, refer to the documentation.

```scala
val space = Map(
  ("x0", UniformDouble(0, 1)),
  ("x1", RandomChoice("foo", "bar"))
)
```
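Conceptually, random search draws each hyperparameter independently from its sampler on every trial. The following is an illustrative sketch of that behavior; `uniformDouble` and `randomChoice` are stand-ins for what samplers like `UniformDouble` and `RandomChoice` do, not the actual Spotz API:

```scala
import scala.util.Random

// Illustrative stand-ins for random samplers; not the Spotz API.
object SamplerSketch {
  // Draw uniformly from [low, high)
  def uniformDouble(low: Double, high: Double, rng: Random): Double =
    low + (high - low) * rng.nextDouble()

  // Pick one of the given choices uniformly at random
  def randomChoice[T](choices: Seq[T], rng: Random): T =
    choices(rng.nextInt(choices.length))

  def main(args: Array[String]): Unit = {
    val rng = new Random()
    // One "trial" samples a fresh value for every dimension of the space
    val trial = Map(
      "x0" -> uniformDouble(0, 1, rng),
      "x1" -> randomChoice(Seq("foo", "bar"), rng)
    )
    println(trial)
  }
}
```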
For grid search, the space is defined by a Map where the key is a string label and the value is an `Iterable[T]`.

```scala
val space = Map(
  ("x0", Range.Double(0, 1, 0.01)),
  ("x1", Seq("foo", "bar"))
)
```
Select the algorithm of your choice to perform the optimization. Some algorithms may require defining a stopping strategy. This states when you'd like the solver to stop searching the defined hyperparameter space for the best hyperparameter values.
```scala
val stopStrategy = StopStrategy.stopAfterMaxTrials(maxTrials)
val optimizer = new SparkRandomSearch[Point, Double](sparkContext, stopStrategy)
```
Currently, there are a few ways to specify stopping criteria:

- Stopping after a maximum time duration: `StopStrategy.stopAfterMaxDuration(maxDuration)`
- Stopping after a maximum number of trials: `StopStrategy.stopAfterMaxTrials(maxTrials)`
- Stopping after a maximum number of trials or a maximum time duration: `StopStrategy.stopAfterMaxTrialsOrMaxDuration(maxTrials, maxDuration)`
Wiring it all together and using the Branin objective function defined above, here is all the necessary boilerplate to make the example work.

```scala
import com.eharmony.spotz.Preamble._
import com.eharmony.spotz.optimizer.StopStrategy
import com.eharmony.spotz.optimizer.random.SparkRandomSearch
import com.eharmony.spotz.optimizer.hyperparam.UniformDouble
import com.eharmony.spotz.examples.BraninObjective
import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setAppName("Branin Function Trials"))

val space = Map(
  ("x1", new UniformDouble(-5, 10)),
  ("x2", new UniformDouble(0, 15))
)
val stopStrategy = StopStrategy.stopAfterMaxTrials(100000)
val optimizer = new SparkRandomSearch[Point, Double](sc, stopStrategy)
val result = optimizer.minimize(new BraninObjective, space)

sc.stop()
```