This is the official implementation of the paper:
- Mitigating Non-Representative Prototypes and Representation Bias in Few-Shot Continual Relation Extraction
- [2025/05/16] Our paper has been accepted to the ACL 2025 Main Conference!
- Navigate to the `Bert` directory:

  ```bash
  cd Bert
  ```
- Install the necessary packages:

  ```bash
  pip install transformers==4.40.0 torch==2.3.0 scikit-learn==1.4.2 nltk==3.8.1 retry==0.9.2
  pip install flash-attn --no-build-isolation
  pip install pytorch_metric_learning
  ```
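  To verify that the environment resolved correctly, a quick version check can help (a minimal sketch; the expected versions are the pins from the install command above):

  ```python
  # Sanity check: confirm the pinned dependency versions are installed.
  import torch
  import transformers
  import sklearn
  import nltk

  print("torch:", torch.__version__)                # expected 2.3.0
  print("transformers:", transformers.__version__)  # expected 4.40.0
  print("scikit-learn:", sklearn.__version__)       # expected 1.4.2
  print("nltk:", nltk.__version__)                  # expected 3.8.1
  ```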
- Run the TACRED training:

  ```bash
  python train_tacred_final.py --task_name Tacred --num_k 5
  ```
- Run the FewRel training:

  ```bash
  python train_fewrel_final.py --task_name FewRel --num_k 5
  ```
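  Here `--num_k` sets the few-shot size (number of shots per relation). Assuming the scripts also accept the other standard few-shot setting (this is an assumption; 5-shot is the only value shown above), a 10-shot run would presumably look like:

  ```bash
  python train_fewrel_final.py --task_name FewRel --num_k 10
  ```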
- Navigate to the `Llama2`, `Llama3`, or `Mistral` subdirectory of the `LLMEs` directory, e.g.:

  ```bash
  cd LLMEs/Llama2   # or LLMEs/Llama3, LLMEs/Mistral
  ```
- Install the necessary packages:

  ```bash
  pip install transformers==4.40.0 torch==2.3.0 scikit-learn==1.4.2 nltk==3.8.1 retry==0.9.2
  pip install llm2vec==0.2.2
  pip install flash-attn --no-build-isolation
  pip install pytorch_metric_learning
  ```
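  Since `flash-attn` and the LLM backbones require a CUDA GPU, it can be worth confirming that the extra packages import cleanly and that a GPU is visible (a minimal sketch):

  ```python
  # Import check for the LLM-specific dependencies, plus a CUDA visibility check.
  import torch
  import llm2vec      # noqa: F401 (import check only)
  import flash_attn   # noqa: F401 (import check only)

  print("CUDA available:", torch.cuda.is_available())
  if torch.cuda.is_available():
      print("device:", torch.cuda.get_device_name(0))
  ```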
- Log in to Hugging Face to access the model weights:

  ```bash
  huggingface-cli login --token your_huggingface_token_to_access_model
  ```
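  If you prefer to authenticate from Python instead (for example, inside a notebook), `huggingface_hub` provides an equivalent call; a minimal sketch using the same token placeholder:

  ```python
  # Programmatic equivalent of `huggingface-cli login`.
  from huggingface_hub import login

  login(token="your_huggingface_token_to_access_model")
  ```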
- Run the TACRED training:

  ```bash
  python train_tacred_final.py --task_name Tacred --num_k 5
  ```

- Run the FewRel training:

  ```bash
  python train_fewrel_final.py --task_name FewRel --num_k 5
  ```
- CPL results with Llama2 and Mistral are evaluated under `float32` for a fair comparison with Quyen et al.; all other experiments use `bf16` (see the loading sketch below).
- Requires Python ≥ 3.8.
- For CPL, add your OpenAI API key to `config.ini` (an example layout is sketched after these notes).
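The precision choice maps to the `torch_dtype` argument when loading a backbone with `transformers`. Below is a minimal sketch of the two settings; the model id `meta-llama/Llama-2-7b-hf` is only an illustrative assumption, not necessarily what the training scripts load:

```python
import torch
from transformers import AutoModel

# bf16: the precision used for most experiments.
model_bf16 = AutoModel.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # illustrative model id (assumption)
    torch_dtype=torch.bfloat16,
)

# float32: the precision used for the CPL comparison with Quyen et al.
model_fp32 = AutoModel.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    torch_dtype=torch.float32,
)
```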
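For the `config.ini` entry, the exact section and key names depend on how the code reads the file, so the layout below is purely a hypothetical illustration:

```ini
; Hypothetical layout; check the repo's config parser for the real section/key names.
[openai]
api_key = your_openai_api_key
```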