The official implementation of the Molecule Attention Transformer (arXiv).
This repository contains:
- `EXAMPLE.ipynb`: a Jupyter notebook with an example of loading pretrained weights into MAT,
- `transformer.py`: the MAT class implementation,
- `utils.py`: utility functions.
More functionality will be available soon!
Pretrained weights are available here.
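For quick orientation, a minimal loading sketch might look like the following. It assumes that `transformer.py` exposes a `make_model` constructor, that the downloaded checkpoint is a plain PyTorch state dict saved as `pretrained_weights.pt`, and that the hyperparameter values shown are placeholders that must match the checkpoint; `EXAMPLE.ipynb` remains the authoritative reference.

```python
# Sketch: loading pretrained weights into MAT.
# Assumptions (not guaranteed by this README): transformer.py provides
# make_model(), the checkpoint is a standard PyTorch state dict, and the
# filename and hyperparameter values below are illustrative only.
import torch
from transformer import make_model

# Build the architecture; these hyperparameters are placeholders and
# must agree with the checkpoint you downloaded.
model = make_model(d_atom=28, N=8, d_model=1024, h=16)

# Load the checkpoint on CPU and copy its parameters into the model.
state_dict = torch.load('pretrained_weights.pt', map_location='cpu')
model.load_state_dict(state_dict)

model.eval()  # switch to inference mode for downstream prediction
```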
In this section we present the average rank across the 7 datasets from our benchmark.
- Results for a hyperparameter search budget of 500 combinations.
- Results for a hyperparameter search budget of 150 combinations.
- PyTorch 1.4
The Transformer implementation is inspired by The Annotated Transformer.