Writing logs to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/bert-base-uncased-ag_news-2020-07-01-21:14/log.txt.
Loading nlp dataset ag_news, split train.
Loading nlp dataset ag_news, split test.
Loaded dataset. Found: 4 labels: ([0, 1, 2, 3])
Loading transformers AutoModelForSequenceClassification: bert-base-uncased
Tokenizing training data. (len: 120000)
Tokenizing eval data (len: 7600)
Loaded data and tokenized in 154.51052498817444s
Training model across 4 GPUs
***** Running training *****
	Num examples = 120000
	Batch size = 16
	Max sequence length = 128
	Num steps = 37500
	Num epochs = 5
	Learning rate = 3e-05
Eval accuracy: 94.22368421052632%
Best acc found. Saved model to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/bert-base-uncased-ag_news-2020-07-01-21:14/.
Eval accuracy: 94.5921052631579%
Best acc found. Saved model to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/bert-base-uncased-ag_news-2020-07-01-21:14/.
Eval accuracy: 94.85526315789473%
Best acc found. Saved model to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/bert-base-uncased-ag_news-2020-07-01-21:14/.
Eval accuracy: 95.14473684210526%
Best acc found. Saved model to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/bert-base-uncased-ag_news-2020-07-01-21:14/.
Eval accuracy: 94.64473684210526%
Saved tokenizer to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/bert-base-uncased-ag_news-2020-07-01-21:14/.
Wrote README to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/bert-base-uncased-ag_news-2020-07-01-21:14/README.md.
Wrote training args to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/bert-base-uncased-ag_news-2020-07-01-21:14/train_args.json.
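The five "Eval accuracy" figures above are fractions over the 7,600-example eval split logged by "Tokenizing eval data (len: 7600)". A minimal sketch recovering the per-epoch correct-prediction counts from the logged percentages (the counts themselves are inferred here, not printed by the trainer; the best checkpoint, epoch 4 at 95.14%, is the one kept as the saved model):

```python
import math

EVAL_SIZE = 7600  # eval-split size from the log

# Eval accuracy per epoch, exactly as logged.
logged_acc = [
    94.22368421052632,
    94.5921052631579,
    94.85526315789473,
    95.14473684210526,  # best checkpoint
    94.64473684210526,
]

# Assumption: accuracy = correct / EVAL_SIZE * 100, so the number of
# correctly classified eval examples per epoch is recoverable by rounding.
correct = [round(p / 100 * EVAL_SIZE) for p in logged_acc]

# Sanity check: each inferred count reproduces the logged percentage.
for c, p in zip(correct, logged_acc):
    assert math.isclose(c / EVAL_SIZE * 100, p)

print(correct)
```

The long repeating decimals in the log are simply artifacts of these ratios over 7,600.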