We report each method's performance on widely-used datasets.
Note that we do not attempt to match the exact scores reported in
the referenced papers when they use additional tricks such as data augmentation
or prompt ensembling.
**Table Heads Explanation**

- **Prompt**: the config of the template.
- **LM**: the pre-trained language model we used.
- **Ref**: the specific yaml file or tutorial script used to achieve the results.
\* The verbalizer `[{"meta":"choice1"}, {"meta":"choice2"}]` is different from the verbalizer used in T5, `["True", "False"]`. Surprisingly, recovering the whole choice1/choice2 sentence is very easy for the LM and yields a much better result (0.72 vs. 0.60).
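To make the difference between the two verbalizers concrete, here is a minimal sketch. The helper `target_text` and the example dict are hypothetical illustrations (not the library's API): they only show what target string each verbalizer asks the LM to generate for a given label.

```python
def target_text(example, label, verbalizer):
    """Resolve one verbalizer entry into the string the LM must generate."""
    entry = verbalizer[label]
    if isinstance(entry, dict) and "meta" in entry:
        # {"meta": "choice1"} -> the target is the full candidate sentence,
        # copied from the example's meta fields
        return example["meta"][entry["meta"]]
    # otherwise the target is a fixed label word such as "True"/"False"
    return entry

# A COPA-style example with two candidate sentences (hypothetical content)
example = {
    "meta": {
        "choice1": "The man fell unconscious.",
        "choice2": "The man got a headache.",
    }
}

meta_verbalizer = [{"meta": "choice1"}, {"meta": "choice2"}]
t5_verbalizer = ["True", "False"]

print(target_text(example, 0, meta_verbalizer))  # The man fell unconscious.
print(target_text(example, 0, t5_verbalizer))    # True
```

With the first verbalizer the model must reproduce the entire candidate sentence, while the second asks it only for a binary label word; the footnote above observes that the former is easier for the LM to learn.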