[Objective] This paper addresses the difficulty of constructing the label mapping for prompt-learning-based relation extraction when labeled data is scarce.
[Methods] The proposed method encodes relation semantics and injects them into the prompt templates, augments the prompt input through prompt ensembling, and extracts salient features during prototype construction via an instance-level attention mechanism.
[Results] We conducted experiments on the FewRel dataset. Our method outperformed the baseline models in accuracy by 2.13, 0.55, 1.4, and 2.91 percentage points in four different few-shot testing scenarios, respectively.
[Limitations] The prompt template does not use learnable virtual prompt tokens, so there is still room for optimizing the answer-word representation.
[Conclusions] The proposed method effectively alleviates the problems of limited information for prototype construction and insufficient accuracy in few-shot scenarios, thereby improving the model's accuracy on the few-shot relation extraction task.