Optimization & Machine Learning

Mathematical optimization is at the heart of many machine learning algorithms.

My research aims to advance machine intelligence through state-of-the-art optimization techniques and their innovative applications.

bsnsing: An R Package for Optimization-Based Decision Tree Learning

The bsnsing (pronounced 'B-sensing', for Boolean Sensing) package provides functions for building a decision tree classifier and making predictions. It addresses two-class and multi-class classification problems under the supervised learning paradigm. When building a decision tree, bsnsing splits a node using a Boolean rule that involves multiple variables. Each split rule is identified by solving an optimization model that minimizes both misclassification and rule complexity. Compared to decision tree learners that seek single-variable splits in a greedy fashion, bsnsing's approach is more holistic and produces trees that are both highly interpretable and accurate.
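
For orientation, here is a minimal usage sketch, assuming the package's standard formula interface (bsnsing() to fit, predict() on new data). Argument names and return formats may differ across versions, so consult the package documentation for specifics.

```r
## Illustrative usage of the bsnsing package (install.packages("bsnsing")).
## The formula interface shown here is an assumption based on the package
## description; see ?bsnsing for the exact arguments in your version.
library(bsnsing)

set.seed(1)
idx   <- sample(nrow(iris), 100)   # split iris into train/test sets
train <- iris[idx, ]
test  <- iris[-idx, ]

# Learn a multi-class decision tree; each split is a multi-variable
# Boolean rule identified by solving an optimization model.
fit <- bsnsing(Species ~ ., data = train)

# Inspect the learned tree and predict on held-out rows
# (the prediction format depends on the package version).
print(fit)
pred <- predict(fit, newdata = test)
```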

This is an active, work-in-progress project.

Composite Linear-nonlinear Regression for Data Mining

Most optimization-based machine learning methods assume a known functional form linking the input and output variables and then find the best-fit parameters for that assumed function. The question is: can we let machines learn more freely? In other words, can we avoid specifying a functional form altogether and instead let the computer discover one and fit its parameters?

I approach this problem by recursively fitting regression models within an evolutionary framework. A pilot software tool is available here, along with a demo of gene expression programming (GEP) in R.
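
To make the idea concrete, here is a toy sketch in base R (not the pilot tool itself): an evolutionary loop searches over which nonlinear transforms of the input enter the model, while the linear coefficients are fit by ordinary least squares and candidate forms are compared by AIC. The transform grammar, data, and all names are illustrative assumptions.

```r
## Toy sketch: discover a functional form evolutionarily, fit parameters linearly.
set.seed(42)

# Synthetic data whose functional form is unknown to the learner.
x <- runif(200, 0.1, 3)
y <- 2 * log(x) + 0.5 * x^2 + rnorm(200, sd = 0.1)

# Grammar of candidate basis transforms the search can combine.
transforms <- list(
  identity = function(z) z,
  square   = function(z) z^2,
  sqrt     = function(z) sqrt(z),
  log      = function(z) log(z),
  inverse  = function(z) 1 / z
)

# A "genome" is a subset of transform names; fitness is the AIC of the
# linear model built from those basis functions.
fitness <- function(genome) {
  if (length(genome) == 0) return(Inf)
  X <- sapply(transforms[genome], function(f) f(x))
  AIC(lm(y ~ X))
}

# Mutation: randomly add or drop one transform from the genome.
mutate <- function(genome) {
  cand <- names(transforms)
  if (runif(1) < 0.5 && length(genome) > 1) {
    genome[-sample(length(genome), 1)]
  } else {
    unique(c(genome, sample(cand, 1)))
  }
}

# Simple (1+1)-style evolutionary loop: keep the child only if it improves fitness.
best <- sample(names(transforms), 1)
for (gen in 1:200) {
  child <- mutate(best)
  if (fitness(child) < fitness(best)) best <- child
}

best                                            # the discovered functional form
summary(lm(y ~ sapply(transforms[best], function(f) f(x))))  # fitted coefficients
```

The design point the sketch illustrates is the division of labor: the evolutionary search handles the combinatorial choice of structure, while standard least squares handles the continuous parameters of whatever structure is proposed.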