This is the official code and data repository for MODIT.
The following dependencies are required:
python 3.6
pytorch 1.5.1
CUDA compilation tools, release 10.1, V10.1.243
fairseq==0.9.0
apex
transformers==2.6.0
tokenizers
tree-sitter
Auxiliary packages are listed in requirements.txt. Please make sure all of these packages are installed.
Run setup.sh to set up the environment and download the pre-processed dataset and models.
Run scripts/run-all-experiments.sh to run the experiments reported in the paper. These experiments address the following research questions:
How does MODIT perform in predicting the correct patch?
What are the contributions of different input modalities to MODIT's performance?
What is the best strategy to encode the input modalities?
A large portion of the code in this repository is borrowed from the PLBART, CodeXGLUE, and CodeBERT repositories. We cordially thank the authors of these repositories for open-sourcing their work.