In this final project, you are going to take part in an exciting machine learning competition. Consider a company that runs a platform similar to Coursera. One thing any instructor may want to know is whether a student will drop out of a class; after all, the dropout rate in online classes is often very high. The prediction should be based on the students' activities within the classes. Having collected some data from the platform, the board of directors of the company decides to hold a competition and open the problem of dropout prediction to experts like you. To win the prize, you need to fight for the leading positions on the scoreboard. You also need to submit a comprehensive report that describes not only your recommended approaches, but also the reasoning behind your recommendations. Well, let's get started!
In this project, you are going to play with data from a Massive Open Online Course (MOOC) platform, which contains information about many courses and students. Your goal is to predict whether a student will drop a course that she/he enrolled in. We provide you with all the logs of each enrollment within the first 30 days of the course. If the student then has no logs within the following 10 days, i.e. the 31st-40th days from the start date of the course, we label him/her as a dropout. (Note that the 31st day starts at 00:00:00.)
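The labeling rule above can be sketched in a few lines. This is only an illustrative sketch; the log timestamps, the variable names, and the example course start date are all hypothetical, since the actual data schema is described in the Data section.

```python
from datetime import datetime, timedelta

# Hypothetical setup: each enrollment is a list of event timestamps.
# The course start date below is made up for illustration only.
course_start = datetime(2015, 10, 1)

def is_dropout(log_times, course_start):
    """Label an enrollment as a dropout (1) if it has no logs during
    days 31-40 from the course start (day 31 begins at 00:00:00)."""
    window_start = course_start + timedelta(days=30)  # 00:00:00 of day 31
    window_end = course_start + timedelta(days=40)    # end of day 40
    has_activity = any(window_start <= t < window_end for t in log_times)
    return 0 if has_activity else 1

# A student active on day 35 is not a dropout.
logs = [course_start + timedelta(days=3), course_start + timedelta(days=35)]
print(is_dropout(logs, course_start))  # 0
```

Note that the training labels are already provided; a sketch like this is only useful for sanity-checking your understanding of the 10-day window.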
The problem is formalized as a “soft” binary classification problem, where the goal is to accurately predict the dropout “truth” of each (student, class) pair. There will be two tracks of competition. The details of the tracks, which differ in their evaluation criteria (i.e. error functions), will be announced later. The data will be divided into a training set and a test set. For the test set, the dropout “truth” will be hidden. Details are in the Data section.
The data sets are processed from the KDDCup 2015 data, which aims at a similar goal with a different evaluation criterion. To maximize fairness, you are not allowed to download the original KDDCup 2015 data at any time, but you are welcome to check the descriptions of that competition.
There are two tracks in this competition; please see each track's Evaluation section.
You are asked by the board to study at least THREE machine learning approaches using the training set above. You should then compare those approaches from different perspectives, such as efficiency, scalability, popularity, and interpretability. In addition, you need to recommend THE BEST ONE of those approaches as your final recommendation for each track and discuss the pros and cons of your choice.
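As a starting point, a comparison of three approaches might look like the sketch below. This is not a recommended solution: the features and labels here are synthetic placeholders (your real features would come from the enrollment logs), the three models are arbitrary examples, and scikit-learn is just one of many packages you may choose, provided you reference it in your report.

```python
# Minimal sketch: cross-validated comparison of three classifiers
# on synthetic placeholder data (assumes scikit-learn is installed).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.RandomState(0)
X = rng.rand(200, 5)                      # placeholder features per enrollment
y = (X[:, 0] + X[:, 1] > 1).astype(int)   # placeholder dropout labels

models = {
    "logistic regression": LogisticRegression(),
    "random forest": RandomForestClassifier(n_estimators=50, random_state=0),
    "linear SVM": LinearSVC(),
}
for name, model in models.items():
    # 5-fold cross-validation accuracy; for the "soft" prediction tracks
    # you would likely look at predicted probabilities instead.
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```

Your actual comparison should of course use the competition's evaluation criteria once they are announced, not plain accuracy.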
The survey report should be no more than SIX A4 pages with readable font sizes. The most important criterion for evaluating your report is replicability. Thus, in addition to the outlines above, you should also describe how you pre-process your data, with emphasis on the features you build, especially those you find useful; introduce the approaches you tried and provide specific references, especially for those approaches that we did not cover in class; and clearly list your experimental settings and the parameters you used (or chose). Other criteria for evaluating your survey report include, but are not limited to, clarity, strength of your reasoning, “correctness” in using machine learning techniques, the workloads of team members, and properness of citations.
Our sincere suggestion: think of your TAs as your boss, who wants to be convinced by your report.
For grading purposes, a minor but required part of your survey report for a two- or three-person team (see the rules below) is how you balance your workloads.
We will limit each team to 6 submissions per day for each track to check your performance. Use your submissions wisely; you do not want to leave the board with the bad impression that you just want to “query” or “overfit” the test examples. After submitting, a scoreboard will show the test error on a random half of the data set. The “hidden” test error on the other half will eventually be used to evaluate your performance.
The competition ends at noon on 01/10/2016. We will have a mini-ceremony to honor the best team(s) on 01/11/2016. The competition site will remain open until the due date of the report.
Report: Please upload one report per team electronically on CEIBA. You do not need to submit a hard copy. The report is due at noon on 01/20/2016.
Teams: By default, you are asked to work in a team of THREE. A one- or two-person team is allowed only if you are willing to perform as well as a three-person team. All team members are expected to share balanced workloads. Any form of unfairness, such as the intention to cover for other members' work, is considered a violation of the honesty policy and will cause some or all members to receive a zero or negative score.
Algorithms: You can use any algorithms, regardless of whether they were taught in class.
Packages: You can use any software packages for the purpose of experiments, but please provide proper references in your report for replicability.
Source Code: You do not need to upload your source code for the final project. Nevertheless, please keep your source code until 02/28/2016 for the graders’ possible inspections.
Grade: The final project is worth 600 points; that is, it is equivalent to three usual homework sets. At least 540 of them will be reserved for the report. The other 60 may depend on some minor criteria, such as your competition results, your discussions on the boards, your workloads, etc.
Collaboration: The general collaboration policy applies. Even though this is a competition, we still encourage collaborations and discussions between different teams.
Data Usage: You can use only the data sets provided in class for your experiments, and you should use the data sets properly. Using any tricks to query the labels of the test set is strictly prohibited.