Debunking Challenge
We are hosting a challenge to interrogate commonly-held beliefs in the deep learning community. Relevant submissions will challenge machine learning theory, common assumptions, and/or folk knowledge. Participants can enter the competition by attaching an extra page to their submitted paper that frames their work in light of the competition. We are offering a prize for the best entry. See below for challenge details and submission criteria.
Submission criteria
To enter the competition, add an additional page to your PDF submission that answers the following questions. Please follow the format in the style-file template to facilitate reviewing. This additional page will be included in the public workshop paper only at the authors' discretion.
- What commonly-held position or belief are you challenging? Provide a short summary of the body of work challenged by your results. A good summary outlines the state of the literature and is reasonable, i.e., people working in this area would agree with your overview. You may cite sources besides published work (e.g., blogs, talks).
- How are your results in tension with this commonly-held position? Detail how your submission challenges the belief described in your answer to the first question. You may cite or synthesize results (e.g., figures, derivations) from the main body of your submission and/or the literature.
- How do you expect your submission to affect future work? Perhaps the new understanding you propose calls for new experiments or theory in the area, or perhaps it casts doubt on a line of research.
Examples of relevant submissions
We provide some (non-exhaustive) examples of work that would make strong submissions to our challenge. Submissions may challenge:
- Experiments or conclusions drawn from experiments: A recent example comes from Besiroglu et al., who resolved an inconsistency in the original Chinchilla scaling laws by extracting data coordinates from a figure and finding a better fit for the parameters of the scaling law.
- Beliefs motivated by theoretical results: For example, Zhang et al. challenged theories suggesting that deep learning models generalize because they cannot fit random labels.
- Common assumptions or folk wisdom: Engels et al. challenged the conjecture that the features in language model representations are one-dimensional subspaces.
Judging and prizes
For both nomination and the final selection, papers will be evaluated according to:
- the pervasiveness of the beliefs they challenge, and
- the extent to which the work compellingly contributes to our understanding of those beliefs.
A panel of three organizers will select the paper that best matches these criteria; the winning paper will receive a $1000 prize, and its authors will be invited to give a lightning talk about their work.