# ArmorSets-GAN

##-------------------------------------------
## Please, if anything seems not to be working, reach out to me and I will try to help / fix it.
##-------------------------------------------

This directory contains all the files relevant to the PA026 course. This submission is similar to the repository found on the project website (https://github.com/Fapannen/ArmorSets-GAN), with a few differences:

- The full dataset is included in this submission (as required by the course guidelines).
- The full preprocessed dataset is included in this submission (as required by the course guidelines).
- All the training images are included in this submission (for convenience / proof of work).
- One discriminator and one generator model are included in this submission (also linked in the official repository: https://drive.google.com/file/d/1LqfvPT7Otp3GyM9ffEa9CIIt3fd_kt_i/view?usp=sharing).

## Folders

A few folders differ from the original repository. All of them are described below for convenience.

### gan

This folder contains the implementation of the GAN model and the functions related to its training.

### hparams

This folder contains the global hyperparameter definitions. Many aspects of the model can be toggled on / off simply by changing values in this file.

### checkpoints

This folder contains the model's checkpoints. The submission includes only one discriminator and one generator for convenience; on my computer, all the checkpoints created during training are kept. Since the models are quite large (50 MB for the discriminator, 200 MB for the generator), I do not include all of them, as doing so would greatly increase the already large size of the submission.

### img

A copy from the official repository.

### preprocessing

This folder contains the files related to preprocessing the images.
The split between preprocess.py and generator.py does not make much sense, so don't be surprised.

### trained_and_generated

This folder contains output examples from one of the previous models (notice the dark background, which means they are not from this course), included for convenience. Feel free to have a look.

### Training_progress

This folder contains the training progress of the model developed in this course, with one image per training epoch. All of the images are created using the **same latent vector input** - this way, the result can be fairly inspected at every epoch. There are a few "noisy" examples - e.g. the ranges [653, 661] and [946, 963], or epochs #999, #1325, etc. These are examples of the network's training failures. As you can see, the network is sometimes able to recover from this failure mode on its own, but the training was also restarted several times from earlier checkpoints to recover from these failures.

### utils

Not related to this course; these are functions used to modify the model between training phases. This was used in the very first model, which was not developed in this course.

### inspections

This folder is related to the first version of the model. Here, I tried to examine the influence of a single element of the latent vector on the produced outputs. This method does not work when the latent space dimension is sufficiently large.

## Using the model to generate outputs

If you want to test the model by generating some outputs, you can do so. You need to install the following libraries: NumPy, OpenCV, Keras, TensorFlow. The default installation of Keras and TensorFlow should work, but I have not tried generating outputs solely on the CPU for a long time, so this information may be outdated. In case of any issues, please reach out to me.

To generate images, use the following command in the project directory.
`python3 inspector.py --epochs=1350 --num_samples=`

The script should automatically find the model file in the `checkpoints` directory and produce outputs to the project directory.

If you wish, you can also try to "inspect an index" of the latent vector space:

`python3 inspector.py --epochs=1350 --num_samples= --inspect_index=`

This fixes all values in the latent vector except the one you select. The model then generates `num_samples` outputs, and you can inspect what the chosen index influences. (Please note that this functionality **does not work as intended** - I have observed that when the latent space is sufficiently large (> 60), this method does not make any sense and the outputs are not visually related. When the latent space size is smaller, individual indices can influence, for example, the size of a shoulder - you can inspect such influence in the `inspections` folder.)
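The idea behind index inspection can be sketched with NumPy. This is only a minimal illustration of the technique, not the project's actual code; the latent dimension of 100, the variable names, and the sweep range [-3, 3] are all my assumptions:

```python
import numpy as np

# Hypothetical values - the real latent dimension is configured in `hparams`.
LATENT_DIM = 100
NUM_SAMPLES = 8
INSPECT_INDEX = 5  # the single latent element we want to vary

# One fixed base vector, shared by every sample...
base = np.random.normal(size=(LATENT_DIM,))

# ...repeated NUM_SAMPLES times, so all rows start out identical.
latents = np.tile(base, (NUM_SAMPLES, 1))

# Only the inspected index differs between samples, swept over a range.
latents[:, INSPECT_INDEX] = np.linspace(-3.0, 3.0, NUM_SAMPLES)

# Feeding `latents` to the generator (e.g. generator.predict(latents))
# would then show what this one index influences in the outputs.
```

With a small latent space the resulting images differ in one visual aspect; with a large one (> 60, as noted above), the outputs are no longer visually related.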