The GAN models were trained in a conda environment. Below you can find how to recreate that environment in order to train GAN models and generate sequences.
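For example, the environment could be created along these lines (the environment name, Python version, and package list below are assumptions for illustration, not the exact versions used; adjust them to match your setup):

```shell
# Create and activate a conda environment for training (name and versions are assumptions)
conda create -n gan_env python=3.7
conda activate gan_env
# Install the deep learning and utility packages the training script presumably depends on
pip install tensorflow keras numpy
```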

Below you can find an explanation of the parameters used to train the model and generate sequences.

(These variables are defined at the beginning of wgan_gp.py.)

- BATCH_SIZE: Batch size (how many regions are used in each iteration).
- ITERS: Number of batch iterations to train the model.
- SEQ_LEN: Length of the input sequences.
- SEQ_DIM: Dimension of the input sequences (4 nucleotides).
- DIM: Dimension of the model; used in the latent space and convolutional layers.
- CRITIC_ITERS: Number of discriminator training iterations per generator iteration.
- LAMBDA: Hyperparameter for the gradient penalty.
- loginterval: The log is saved once every N iterations.
- seqinterval: Sample sequences are generated once every N iterations.
- modelinterval: The model files are saved once every N iterations.
- selectedmodel: When generating sequences, the iteration number of the model you want to use.
- suffix: When generating sequences, the suffix added to the header of the FASTA regions.
- ngenerate: When generating sequences, the number of sequences to generate relative to the batch size. Example: 1 (128 sequences will be generated if the batch size is 128).
- outputdirc: Path to the output folder.
- fastafile: Path to the FASTA file to use as real enhancers.
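As an illustration, the top of wgan_gp.py might define these variables like the following sketch (every value here is a hypothetical default for illustration, not the setting used in the actual experiments):

```python
# Hypothetical example values for the training parameters (not the actual defaults).
BATCH_SIZE = 128        # regions used in each iteration
ITERS = 100000          # total batch iterations for training
SEQ_LEN = 500           # length of each input sequence
SEQ_DIM = 4             # one dimension per nucleotide (A, C, G, T)
DIM = 64                # model dimension for the latent space and conv layers
CRITIC_ITERS = 5        # discriminator iterations per generator iteration
LAMBDA = 10             # gradient penalty coefficient
loginterval = 100       # save the log every N iterations
seqinterval = 1000      # sample sequences every N iterations
modelinterval = 1000    # save model files every N iterations
selectedmodel = 100000  # model iteration to load when generating sequences
suffix = "_generated"   # suffix for FASTA headers of generated sequences
ngenerate = 1           # number of sequences to generate, relative to BATCH_SIZE
outputdirc = "./output/"      # path to the output folder
fastafile = "./enhancers.fa"  # FASTA file with the real enhancers
```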

How to run the model
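A typical invocation might look like the following (the exact command-line interface is an assumption; check wgan_gp.py for how training and generation are actually triggered):

```shell
# Train the model; parameters are edited at the top of wgan_gp.py (assumed invocation)
python wgan_gp.py
```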

This will result in the following outputs:

- ./models/: Folder containing the saved model weight files.
- ./samples_ACGT/: Folder containing sequences sampled during training.
- ./samples_raw/: Folder containing sequences sampled during training, in their raw format.
- ./gen_seq/: Folder containing sequences generated after training.
- ./disc.json: Architecture file of the discriminator.
- ./gen.json: Architecture file of the generator.
- ./d_g_loss.pkl: Loss values logged during training.
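The raw samples are presumably the generator's per-position probabilities over the four nucleotides, which are decoded to ACGT by taking the most likely nucleotide at each position. A minimal sketch of that decoding step (assuming this raw format; the actual script may differ):

```python
# Decode raw generator output (per-position probabilities over 4 nucleotides)
# into an ACGT string by taking the argmax at each position.
# Assumption: the raw format is one row of 4 floats per sequence position.
NUCLEOTIDES = "ACGT"

def decode_raw_sequence(raw_seq):
    """raw_seq: list of [p_A, p_C, p_G, p_T] rows, one per position."""
    return "".join(
        NUCLEOTIDES[max(range(4), key=lambda i: row[i])]
        for row in raw_seq
    )

# Example: two positions, most likely A then C
print(decode_raw_sequence([[0.9, 0.05, 0.03, 0.02],
                           [0.1, 0.7, 0.1, 0.1]]))  # -> AC
```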