Commit `98c81c2` (1 parent: `b619574`), showing 6 changed files with 421 additions and 365 deletions.
**.gitignore**

```
@@ -1,17 +1,9 @@
.idea/
__pycache__/
vflood/
logs/
records/cp_WaterNet.pth.tar
output/
output2/
overlay/
records/
MeshTransformer/
video_module/logs/
image_module/WaterSegModels/
env/
output/
```
**README.md**
````diff
@@ -2,17 +2,24 @@
 This is an official PyTorch implementation for the paper "V-FloodNet: A Video Segmentation System for Urban Flood Detection and Quantification".
 
-## Environments
+## 1 Environments
 
+### 1.1 Code and packages
 We developed and tested the source code under Ubuntu 18.04 with the PyTorch framework.
 The following packages are required to run the code.
 
-First, a python virtual environment is recommended.
+First, git clone this repository:
+```bash
+git clone https://github.com/xmlyqing00/V-FloodNet.git
+```
+
+Second, a Python virtual environment is recommended.
 We use `venv` to create a virtual environment named `vflood` and activate it.
+Then recursively pull the submodule code.
 
 ```shell
-python3 -m venv env
-source env/bin/activate
+python3 -m venv vflood
+source vflood/bin/activate
+git submodule update --init --recursive
 ```
````
````diff
@@ -23,7 +30,7 @@ In the virtual environment, install the following required packages from their o
 - [Detectron2](https://github.com/facebookresearch/detectron2) for reference object segmentation.
 - [MeshTransformer](https://github.com/microsoft/MeshTransformer) for human detection and 3D mesh alignment.
 
-We provide the corresponding installation command here
+We provide the corresponding installation commands here; you can replace the version numbers to fit your environment.
 
 ```shell
 pip install torch==1.8.2+cu111 torchvision==0.9.2+cu111 torchaudio==0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
````
````diff
@@ -39,27 +46,56 @@ Then install the rest of the packages indicated in `requirements.txt`
 pip install -r requirements.txt
 ```
 
-## Usage
+### 1.2 Pretrained Models
 
-Download and extract the pretrained weights, and put them in the folder `./records/`. Weights and groundtruths are stored in [Google Drive](https://drive.google.com/file/d/1r0YmT24t4uMwi4xtSLXD5jyaMIuMzorS/view?usp=sharing).
+First, run the following script to download the pretrained models of MeshTransformer:
+```bash
+sh scripts/download_MeshTransformer_models.sh
+```
 
-### Water Image Segmentation
-Put the testing images in `image_folder`, then
+Second, download the SMPL model `mpips_smplify_public_v2.zip` from the official website [SMPLify](http://smplify.is.tue.mpg.de/). Extract it and place the model file `basicModel_neutral_lbs_10_207_0_v1.0.0.pkl` at `./MeshTransformer/metro/modeling/data`.
+<!-- - Download `MANO_RIGHT.pkl` from [MANO](https://mano.is.tue.mpg.de/), and place it at `${REPO_DIR}/metro/modeling/data`. -->
+<!--
+```
+${REPO_DIR}
+|-- metro
+|   |-- modeling
+|   |   |-- data
+|   |   |   |-- basicModel_neutral_lbs_10_207_0_v1.0.0.pkl
+|   |   |   |-- MANO_RIGHT.pkl
+|-- models
+|-- datasets
+|-- predictions
+|-- README.md
+|-- ...
+```
+-->
+<!-- Please check [/metro/modeling/data/README.md](../metro/modeling/data/README.md) for further details. -->
 
+Third, download the archives from [Google Drive](https://drive.google.com/drive/folders/1DURwcb_qhBeWYznTrpJ-7yGJTHxm7pxC?usp=sharing).
+Extract the pretrained models for water segmentation from `records.zip` and put them in the folder `./records/`.
+Extract the water dataset `WaterDataset` to any path; it includes the training images and testing videos.
 
+## 2 Usage
 
+### 2.1 Water Image Segmentation
+Put the testing images in a folder, then:
 ```shell
 python test_image_seg.py \
     --test_path=/path/to/image_folder --test_name=<test_name>
 ```
 The default output folder is `output/segs/`.
 
-### Water Video Segmentation
+### 2.2 Water Video Segmentation
 If your input is a video, we provide a script `scripts/cvt_video_to_imgs.py` to extract the frames of the video.
-Put the extracted frames in `frame_folder`, then
+Put the extracted frames in a folder, then:
 ```shell
 python test_video_seg.py \
     --test-path=/path/to/frame_folder --test-name=<test_name>
 ```
 
-### Water Depth Estimation
+### 2.3 Water Depth Estimation
 
 We provide three options, `stopsign`, `people`, and `ref`, for `--opt` to specify three types of reference objects.
 ```shell
````
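The `--opt` flag above selects among `stopsign`, `people`, and `ref`. As a rough illustration of how such a choice can be exposed on a command line, here is a hypothetical stdlib-only sketch; it is not the repository's code, and the parser and flag handling are invented for demonstration:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical parser mirroring the --opt choices described above.
    parser = argparse.ArgumentParser(description="Water depth estimation (sketch)")
    parser.add_argument(
        "--opt",
        choices=["stopsign", "people", "ref"],
        required=True,
        help="type of reference object used to estimate water depth",
    )
    parser.add_argument("--test-name", dest="test_name", default=None)
    return parser

# Parse a sample command line instead of sys.argv, for demonstration.
args = build_parser().parse_args(["--opt", "stopsign", "--test-name", "demo"])
print(args.opt, args.test_name)  # prints: stopsign demo
```

Using `choices` lets `argparse` reject any unknown reference-object type with a usage error instead of failing later in the pipeline.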
````diff
@@ -71,5 +107,5 @@ For input video, to compare the estimated water level with the groundtruths in `
 python cmp_hydrograph.py --test-name=<test_name>
 ```
 
-## Copyright
+## 3 Copyright
 This paper has been submitted to the Elsevier journal Computers, Environment and Urban Systems and is under review. The corresponding author is Xin Li (<[email protected]>). All rights are reserved.
````
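The `cmp_hydrograph.py` step above compares estimated water levels against groundtruth gauge readings. As a minimal illustration of that kind of comparison, here is a stdlib-only sketch; the function name and the sample numbers are invented, not taken from the repository:

```python
import math

def rmse(estimates, groundtruth):
    """Root-mean-square error between two equal-length water-level series."""
    if len(estimates) != len(groundtruth) or not estimates:
        raise ValueError("series must be non-empty and of equal length")
    squared = [(e - g) ** 2 for e, g in zip(estimates, groundtruth)]
    return math.sqrt(sum(squared) / len(squared))

# Invented sample: estimated water levels (m) vs. gauge readings at the same timestamps.
estimated = [0.52, 0.61, 0.70]
gauge = [0.50, 0.60, 0.75]
print(round(rmse(estimated, gauge), 4))  # prints 0.0316
```

A real comparison would first align the two series on a common set of timestamps before computing the error.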