Update README.md

- .gitattributes +1 -0
- PID.png +3 -0
- README.md +81 -3
.gitattributes
CHANGED

@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+PID.png filter=lfs diff=lfs merge=lfs -text

PID.png
ADDED (Git LFS)
README.md
CHANGED

@@ -1,3 +1,81 @@
# PID: Physics-Informed Diffusion Model for Infrared Image Generation

<img src="PID.png" alt="PID" style="zoom:50%;" />

## Update

* 2025/05: Our paper has been accepted by Pattern Recognition: https://doi.org/10.1016/j.patcog.2025.111816
* We have released our code.

## Environment

It is recommended to install the environment from `environment.yaml`:

```bash
conda env create --file=environment.yaml
```

## Datasets

Download the **KAIST** dataset from https://github.com/SoonminHwang/rgbt-ped-detection

Download the **FLIRv1** dataset from https://www.flir.com/oem/adas/adas-dataset-form/

We adopt the official dataset splits in our experiments.

## Checkpoint

VQGAN can be downloaded from https://ommer-lab.com/files/latent-diffusion/vq-f8.zip (other first-stage autoencoder models can be downloaded from https://github.com/CompVis/latent-diffusion).

TeVNet and PID checkpoints can be found on [HuggingFace](https://huggingface.co/FerrisMao/PID).
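The checkpoint archive can be fetched and unpacked with the Python standard library. This is a convenience sketch, not part of the repo: the helper names and the destination directory `models/vq-f8` are our assumptions, so adjust the paths to wherever the configs expect the weights.

```python
# Sketch (not part of the repo): fetch and unpack the VQ-f8 archive.
# The destination directory "models/vq-f8" is an assumed layout.
import urllib.request
import zipfile
from pathlib import Path


def extract_checkpoint(zip_path, dest):
    """Unpack a checkpoint archive and return the extracted member names."""
    dest = Path(dest)
    dest.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(dest)
        return zf.namelist()


def download_checkpoint(url, zip_path):
    """Download the archive only if it is not already on disk."""
    zip_path = Path(zip_path)
    if not zip_path.exists():
        urllib.request.urlretrieve(url, zip_path)  # the archive is large
    return zip_path


# Usage (commented out to avoid a large download):
# path = download_checkpoint("https://ommer-lab.com/files/latent-diffusion/vq-f8.zip", "vq-f8.zip")
# extract_checkpoint(path, "models/vq-f8")
```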

## Evaluation

Use the shell script to evaluate. `indir` is the input directory of visible RGB images, `outdir` is the output directory for the translated infrared images, and `config` is the chosen config in `configs/latent-diffusion/config.yaml`. We provide some RGB images in `dataset/KAIST` for quick evaluation.

```sh
bash run_test_kaist512_vqf8.sh
```
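After the script finishes, a quick stdlib check (our addition, not part of the repo) can confirm that every RGB input in `indir` produced a correspondingly named image in `outdir`; the extension list is an assumption.

```python
# Sanity check (our addition): report input images in `indir` that have no
# correspondingly named file in `outdir`.
from pathlib import Path


def _stems(d, exts):
    """Collect filename stems of images in a directory."""
    return {p.stem for p in Path(d).iterdir() if p.suffix.lower() in exts}


def missing_outputs(indir, outdir, exts=(".png", ".jpg", ".jpeg")):
    """Return input stems with no matching translated image in outdir."""
    return sorted(_stems(indir, exts) - _stems(outdir, exts))


# Example: missing_outputs("dataset/KAIST", "outputs") returns [] when every
# input image was translated.
```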

## Train

### Dataset preparation

Prepare corresponding RGB and infrared images with the same filenames in two directories.
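The pairing requirement above can be verified with a short stdlib script. This is our sketch (the directory names are placeholders): it matches files across the two directories by filename stem and flags any image that lacks a counterpart.

```python
# Sketch (directory names are placeholders): pair RGB and infrared images by
# filename and flag any image that lacks a counterpart.
from pathlib import Path


def pair_images(rgb_dir, ir_dir):
    """Return (pairs, unpaired), where pairs maps stem -> (rgb_path, ir_path)."""
    rgb = {p.stem: p for p in Path(rgb_dir).iterdir() if p.is_file()}
    ir = {p.stem: p for p in Path(ir_dir).iterdir() if p.is_file()}
    common = sorted(rgb.keys() & ir.keys())
    pairs = {s: (rgb[s], ir[s]) for s in common}
    unpaired = sorted((rgb.keys() | ir.keys()) - set(common))
    return pairs, unpaired
```

An empty `unpaired` list means the two directories are ready for training.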

### Stage 1: Train TeVNet

```bash
cd TeVNet
bash shell/train.sh
```

### Stage 2: Train PID

To accelerate training, we recommend using our pretrained model.

```bash
bash shell/run_train_kaist512_vqf8.sh
```

## Acknowledgements

Our code is built upon [LDM](https://github.com/CompVis/latent-diffusion) and [HADAR](https://github.com/FanglinBao/HADAR). We thank the authors for their excellent work.

## Citation

If you find this work helpful in your research, please consider citing our paper:

```bibtex
@article{mao2026pid,
  title={PID: physics-informed diffusion model for infrared image generation},
  author={Mao, Fangyuan and Mei, Jilin and Lu, Shun and Liu, Fuyang and Chen, Liang and Zhao, Fangzhou and Hu, Yu},
  journal={Pattern Recognition},
  volume={169},
  pages={111816},
  year={2026},
  publisher={Elsevier}
}
```

If you have any questions, feel free to contact [email protected].