Writing logs to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/roberta-base-rotten_tomatoes-2020-06-25-16:10/log.txt.
Loading nlp dataset rotten_tomatoes, split train.
Loading nlp dataset rotten_tomatoes, split validation.
Loaded dataset. Found: 2 labels: ([0, 1])
Loading transformers AutoModelForSequenceClassification: roberta-base
Tokenizing training data. (len: 8530)
Tokenizing eval data (len: 1066)
Loaded data and tokenized in 9.977334022521973s
Training model across 4 GPUs
***** Running training *****
  Num examples = 8530
  Batch size = 64
  Max sequence length = 128
  Num steps = 1330
  Num epochs = 10
  Learning rate = 2e-05
Eval accuracy: 87.4296435272045%
Best acc found. Saved model to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/roberta-base-rotten_tomatoes-2020-06-25-16:10/.
Eval accuracy: 89.77485928705441%
Best acc found. Saved model to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/roberta-base-rotten_tomatoes-2020-06-25-16:10/.
Eval accuracy: 90.33771106941839%
Best acc found. Saved model to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/roberta-base-rotten_tomatoes-2020-06-25-16:10/.
Eval accuracy: 89.21200750469043%
Eval accuracy: 89.49343339587243%
Eval accuracy: 89.6810506566604%
Eval accuracy: 89.9624765478424%
Eval accuracy: 90.0562851782364%
Eval accuracy: 89.8686679174484%
Eval accuracy: 89.77485928705441%
Saved tokenizer to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/roberta-base-rotten_tomatoes-2020-06-25-16:10/.
Wrote README to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/roberta-base-rotten_tomatoes-2020-06-25-16:10/README.md.
Wrote training args to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/roberta-base-rotten_tomatoes-2020-06-25-16:10/train_args.json.
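
Since the log reports that the best checkpoint, tokenizer, and training args were written to the output directory above, the fine-tuned model can be reloaded directly with the transformers `from_pretrained` API. This is a minimal sketch, not part of the original run: the output path is taken from the log, the model/tokenizer classes are assumed from the "Loading transformers AutoModelForSequenceClassification" line, and the sample sentence is purely illustrative.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Output directory reported in the training log above.
output_dir = "/p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/roberta-base-rotten_tomatoes-2020-06-25-16:10/"

# Load the best checkpoint and the saved tokenizer from disk.
model = AutoModelForSequenceClassification.from_pretrained(output_dir)
tokenizer = AutoTokenizer.from_pretrained(output_dir)
model.eval()

# Classify a sample review; the log reports 2 labels ([0, 1]) and a max
# sequence length of 128.
inputs = tokenizer(
    "a gripping, beautifully shot film",
    return_tensors="pt",
    truncation=True,
    max_length=128,
)
logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())
```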