Writing logs to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/roberta-base-rotten_tomatoes-2020-06-25-13:09/log.txt.
Loading nlp dataset rotten_tomatoes, split train.
Loading nlp dataset rotten_tomatoes, split validation.
Loaded dataset. Found: 2 labels: ([0, 1])
Loading transformers AutoModelForSequenceClassification: roberta-base
Tokenizing training data. (len: 8530)
Tokenizing eval data (len: 1066)
Loaded data and tokenized in 10.02656078338623s
Training model across 4 GPUs
***** Running training *****
	Num examples = 8530
	Batch size = 128
	Max sequence length = 128
	Num steps = 660
	Num epochs = 10
	Learning rate = 5e-05
Eval accuracy: 89.11819887429644%
Best acc found. Saved model to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/roberta-base-rotten_tomatoes-2020-06-25-13:09/.
Eval accuracy: 90.0562851782364%
Best acc found. Saved model to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/roberta-base-rotten_tomatoes-2020-06-25-13:09/.
Eval accuracy: 89.9624765478424%
Eval accuracy: 89.77485928705441%
Eval accuracy: 87.99249530956847%
Eval accuracy: 89.02439024390245%
Eval accuracy: 89.21200750469043%
Eval accuracy: 89.8686679174484%
Eval accuracy: 89.58724202626641%
Eval accuracy: 90.33771106941839%
Best acc found. Saved model to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/roberta-base-rotten_tomatoes-2020-06-25-13:09/.
Saved tokenizer to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/roberta-base-rotten_tomatoes-2020-06-25-13:09/.
Wrote README to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/roberta-base-rotten_tomatoes-2020-06-25-13:09/README.md.
Wrote training args to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/roberta-base-rotten_tomatoes-2020-06-25-13:09/train_args.json.
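The three "Best acc found. Saved model ..." lines mark the epochs whose eval accuracy beat the running best, so the checkpoint on disk at the end is the epoch-10 model (90.34%). A minimal Python sketch of that best-checkpoint rule, using the ten accuracies from the log (the function name is hypothetical, not TextAttack's actual API):

```python
def best_checkpoint_epochs(eval_accuracies):
    """Return the 1-based epochs at which a new best eval accuracy is found.

    Sketch of the logic behind "Best acc found. Saved model ..." in the log:
    after each epoch's evaluation, save only if accuracy improved.
    """
    best = float("-inf")
    saved_at = []
    for epoch, acc in enumerate(eval_accuracies, start=1):
        if acc > best:
            best = acc
            saved_at.append(epoch)  # a real trainer would write the model here
    return saved_at

# Eval accuracies copied verbatim from the ten epochs in the log above.
ACCS = [89.11819887429644, 90.0562851782364, 89.9624765478424,
        89.77485928705441, 87.99249530956847, 89.02439024390245,
        89.21200750469043, 89.8686679174484, 89.58724202626641,
        90.33771106941839]

print(best_checkpoint_epochs(ACCS))  # → [1, 2, 10]
```

This matches the log: saves after epochs 1 and 2, then not again until epoch 10, when 90.34% finally exceeds the epoch-2 best of 90.06%.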