rtg

Project page for NSF grant "RTG: Understanding dynamic big data with complex structure"

Daniel Zhang

Daniel’s research focuses on boosting, an algorithmic technique that combines weak machine learning predictors into a single, stronger predictor. Boosting algorithms are among the most widely used methods in machine learning thanks to their empirical effectiveness and solid theoretical backing. However, the development of boosting theory and algorithms for settings with partial feedback is still ongoing. Boosting has traditionally been studied under “full-information feedback”: problem settings where the boosting algorithm learns the correct label for every example, so it knows exactly which predictions were wrong and what they should have been. Under “bandit feedback,” by contrast, the boosting algorithm makes a prediction but only learns whether that prediction was right or wrong. Designing boosting algorithms that work under this more challenging feedback model is the centerpiece of Daniel’s work. Such feedback models have applications in mobile health, computational advertising, and learning analytics.
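As background, here is a minimal sketch of classical full-information boosting, an AdaBoost-style weighted-majority vote over decision stumps. It is an illustration only, not Daniel’s own algorithms; the dataset, thresholds, and round count are invented for the example. The comment at the error-computation step marks exactly the information a bandit-feedback booster would not get to see.

```python
import math

# Toy 1-D dataset (invented for illustration); labels in {-1, +1}.
# The positive class occupies an interval, so no single stump is perfect.
X = [0.1, 0.2, 0.3, 0.4, 0.5, 0.7, 0.8]
y = [-1, -1, -1, +1, +1, -1, -1]

def stump(theta, s):
    """Weak learner: predict s if x > theta, else -s."""
    return lambda x: s if x > theta else -s

# Candidate weak learners: a few thresholds, both orientations.
candidates = [stump(t, s) for t in (0.05, 0.35, 0.6) for s in (+1, -1)]

def adaboost(rounds=5):
    w = [1.0 / len(X)] * len(X)   # one weight per training example
    ensemble = []                 # list of (alpha, weak_learner) pairs
    for _ in range(rounds):
        # Full-information step: computing each learner's weighted error
        # requires knowing the true label y_i of every example. Under
        # bandit feedback the booster would only observe whether its own
        # prediction was right, so this step is what becomes unavailable.
        errs = [sum(wi for wi, xi, yi in zip(w, X, y) if h(xi) != yi)
                for h in candidates]
        err, h = min(zip(errs, candidates), key=lambda p: p[0])
        if err >= 0.5:
            break                 # no weak learner beats random guessing
        err = max(err, 1e-12)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, h))
        # Reweight: misclassified examples gain weight, correct ones lose.
        w = [wi * math.exp(-alpha * yi * h(xi))
             for wi, xi, yi in zip(w, X, y)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    """Weighted-majority vote of the weak learners."""
    return 1 if sum(a * h(x) for a, h in ensemble) > 0 else -1

model = adaboost()
accuracy = sum(predict(model, xi) == yi for xi, yi in zip(X, y)) / len(X)
```

On this toy set the weighted vote classifies every training point correctly, while the best single stump errs on two; that gap is the point of boosting. A bandit-feedback booster must recover something like the weighted-error signal from correctness bits alone, which is what makes the partial-feedback setting hard.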