- Generator: we implemented two generators, one with an LSTM and one with a GRU (folders: LSTM_VAE and GRU_VAE). The current results are not ideal yet, but positive samples can already be distinguished from negative ones, as shown below.
positive:
- i movie is a of the best films ever ever seen it i the was it to lot of watch a the i was and seen it lot of the the the movie was the movie of not a lot of the and own is i to be a and and the it thing movie is a the the i was enjoyed to see it and have not i to the the movie the i and lot times ago to i a and movie is the i a that the the
negative:
- i this is a of the worst movies ever ever seen the i was it seen favorite was have a disappointed and of the and the minutes of i be the the and was a funny and and existent have the and the not are the and are know the the as the and the of the and and the and the the and are have the of and of the of the the is the that and and br
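Reference [2] below suggests the generators draw discrete tokens differentiably via Gumbel-Softmax sampling. A minimal NumPy sketch of that sampling step (the function name and shapes are illustrative, not the repo's actual API):

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Sample a near-one-hot vector over the vocabulary from `logits`.

    Hypothetical helper following the Gumbel-Softmax trick (ref [2]):
    add Gumbel(0, 1) noise to the logits, then take a temperature-
    scaled softmax. As tau -> 0 the output approaches a one-hot sample.
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(1e-9, 1.0, size=logits.shape)
    g = -np.log(-np.log(u))            # Gumbel(0, 1) noise
    y = (logits + g) / tau
    y = np.exp(y - y.max())            # numerically stable softmax
    return y / y.sum()

# toy vocabulary of 4 tokens
vocab_logits = np.array([2.0, 0.5, -1.0, 0.1])
sample = gumbel_softmax(vocab_logits, tau=0.5)
# sample sums to 1; lowering tau concentrates mass on one token
```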
- We use the IMDB dataset (25,000 movie reviews, with positive and negative reviews each accounting for half).
- We trained a textCNN model as the target model; its accuracy reaches 98.4%.
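For context, a textCNN classifies a review by sliding convolutional filters of several widths over the word embeddings and max-pooling each feature map over time. A toy NumPy forward pass with illustrative shapes (the repo's actual model, vocabulary, and weights will differ):

```python
import numpy as np

def textcnn_forward(tokens, emb, filters, w_out, b_out):
    """Sketch of a textCNN forward pass, not the repo's implementation:
    embed -> 1-D conv per filter width -> ReLU -> max-over-time pool
    -> concatenate -> linear layer producing positive/negative logits."""
    x = emb[tokens]                                # (seq_len, emb_dim)
    pooled = []
    for w in filters:                              # w: (width, emb_dim, n_maps)
        width = w.shape[0]
        convs = [np.tensordot(x[i:i + width], w, axes=([0, 1], [0, 1]))
                 for i in range(len(tokens) - width + 1)]
        pooled.append(np.max(np.maximum(convs, 0), axis=0))
    h = np.concatenate(pooled)                     # (n_widths * n_maps,)
    return h @ w_out + b_out                       # (2,) class logits

rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 8))                    # toy 100-word vocabulary
filters = [rng.normal(size=(k, 8, 4)) for k in (3, 4, 5)]
w_out, b_out = rng.normal(size=(12, 2)), np.zeros(2)
logits = textcnn_forward([5, 17, 3, 42, 7, 9], emb, filters, w_out, b_out)
```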
- Train
- Run the file train_with_LSTM.py (recommended, since we have already trained it once) or train_with_GRU.py to train the model with an LSTM/GRU generator.
- Evaluation
- Run the file evaluation.py to evaluate the model from three aspects: generation speed, text quality, and attack success rate.
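Of the three aspects, attack success rate is simply the fraction of reviews whose target-model prediction flips on the adversarial text. A minimal sketch (the helper name is hypothetical, not evaluation.py's API):

```python
def attack_success_rate(clean_preds, adv_preds):
    """Fraction of examples whose target-model label flips after the
    clean review is replaced by its adversarial version (hypothetical
    helper; evaluation.py may compute this differently)."""
    assert len(clean_preds) == len(adv_preds)
    flipped = sum(c != a for c, a in zip(clean_preds, adv_preds))
    return flipped / len(clean_preds)

# toy labels: 1 = positive, 0 = negative
clean = [1, 1, 0, 0, 1]
adv = [0, 1, 1, 0, 0]
rate = attack_success_rate(clean, adv)   # 3 of 5 flipped -> 0.6
```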
- [1] Generating Natural Language Adversarial Examples on a Large Scale with Generative Models
- [2] Categorical Reparameterization with Gumbel-Softmax
- Fang Chen
- Sang Yuchen
- For academic and non-commercial use only.