
Commit

update README and data
xmlyqing00 committed Jun 8, 2022
1 parent b619574 commit 98c81c2
Showing 6 changed files with 421 additions and 365 deletions.
14 changes: 3 additions & 11 deletions .gitignore
@@ -1,17 +1,9 @@
.idea/

__pycache__/


vflood/
logs/

records/cp_WaterNet.pth.tar
output/
output2/
overlay/
records/
MeshTransformer/

video_module/logs/
image_module/WaterSegModels/

env/
output/
62 changes: 49 additions & 13 deletions README.md
@@ -2,17 +2,24 @@

This is an official PyTorch implementation for paper "V-FloodNet: A Video Segmentation System for Urban Flood Detection and Quantification".

## Environments
## 1 Environments

### 1.1 Code and packages
We developed and tested the source code on Ubuntu 18.04 with the PyTorch framework.
The following packages are required to run the code.

First, a python virtual environment is recommended.
First, clone this repository:
```bash
git clone https://github.com/xmlyqing00/V-FloodNet.git
```

Second, a Python virtual environment is recommended.
We use `venv` to create a virtual environment named `vflood` and activate it.
Then, recursively pull the submodule code.

```shell
python3 -m venv env
source env/bin/activate
python3 -m venv vflood
source vflood/bin/activate
git submodule update --init --recursive
```

@@ -23,7 +30,7 @@ In the virtual environment, install the following required packages from their official websites:
- [Detectron2](https://github.com/facebookresearch/detectron2) for reference objects segmentation.
- [MeshTransformer](https://github.com/microsoft/MeshTransformer) for human detection and 3D mesh alignment.

We provide the corresponding installation command here
We provide the corresponding installation commands here; you can replace the version numbers with ones that fit your environment.

```shell
pip install torch==1.8.2+cu111 torchvision==0.9.2+cu111 torchaudio==0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
@@ -39,27 +46,56 @@ Then install the rest of the packages listed in `requirements.txt`:
pip install -r requirements.txt
```

## Usage
### 1.2 Pretrained Models

Download and extract the pretrained weights, and put them in the folder `./records/`. Weights and groundtruths are stored in [Google Drive](https://drive.google.com/file/d/1r0YmT24t4uMwi4xtSLXD5jyaMIuMzorS/view?usp=sharing).
First, run the following script to download the pretrained models for MeshTransformer:
```bash
sh scripts/download_MeshTransformer_models.sh
```

### Water Image Segmentation
Put the testing images in `image_folder`, then
Second, download the SMPL model `mpips_smplify_public_v2.zip` from the official website [SMPLify](http://smplify.is.tue.mpg.de/). Extract it and place the model file `basicModel_neutral_lbs_10_207_0_v1.0.0.pkl` in `./MeshTransformer/metro/modeling/data`.
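The placement step above can be scripted. Note that the path of the `.pkl` file inside the SMPLify archive is an assumption and may differ in your download; locate the file after extracting if the `cp` path does not match:

```bash
# Assumes mpips_smplify_public_v2.zip is in the current directory.
unzip -o mpips_smplify_public_v2.zip -d smplify_tmp
mkdir -p MeshTransformer/metro/modeling/data
# The source path below is an assumption about the archive layout; adjust if needed.
cp smplify_tmp/smplify_public/code/models/basicModel_neutral_lbs_10_207_0_v1.0.0.pkl \
   MeshTransformer/metro/modeling/data/
```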
<!-- - Download `MANO_RIGHT.pkl` from [MANO](https://mano.is.tue.mpg.de/), and place it at `${REPO_DIR}/metro/modeling/data`. -->
<!--
```
${REPO_DIR}
|-- metro
| |-- modeling
| | |-- data
| | | |-- basicModel_neutral_lbs_10_207_0_v1.0.0.pkl
| | | |-- MANO_RIGHT.pkl
|-- models
|-- datasets
|-- predictions
|-- README.md
|-- ...
|-- ...
``` -->
<!-- Please check [/metro/modeling/data/README.md](../metro/modeling/data/README.md) for further details. -->

Third, download the archives from [Google Drive](https://drive.google.com/drive/folders/1DURwcb_qhBeWYznTrpJ-7yGJTHxm7pxC?usp=sharing).
Extract the pretrained models for water segmentation from `records.zip` and put them in the folder `./records/`.
Extract the water dataset `WaterDataset`, which includes the training images and testing videos, to any path.
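The Google Drive download can also be done from the command line. The `gdown` helper is not part of this repository's requirements, so installing it here is an assumption, as is the layout of the extracted archive:

```bash
pip install gdown   # third-party Google Drive downloader (not in requirements.txt)
gdown --folder "https://drive.google.com/drive/folders/1DURwcb_qhBeWYznTrpJ-7yGJTHxm7pxC"
# records.zip is expected to extract into ./records/; adjust if the archive layout differs.
unzip -o records.zip -d .
```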


## 2 Usage

### 2.1 Water Image Segmentation
Put the testing images in a folder, then run
```shell
python test_image_seg.py \
--test_path=/path/to/image_folder --test_name=<test_name>
```
The default output folder is `output/segs/`.

### Water Video Segmentation
### 2.2 Water Video Segmentation
If your input is a video, we provide a script `scripts/cvt_video_to_imgs.py` to extract frames from the video.
Put the extracted frames in `frame_folder`, then
Put the extracted frames in a folder, then run
```shell
python test_video_seg.py \
--test-path=/path/to/frame_folder --test-name=<test_name>
```
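If you prefer not to use the provided script, `ffmpeg` (an external tool, not part of this repository) can extract frames as well; the input filename, output folder, and numbering pattern below are only examples:

```shell
mkdir -p /path/to/frame_folder
# -qscale:v 2 keeps high JPEG quality; %05d.jpg numbers frames as 00001.jpg, 00002.jpg, ...
ffmpeg -i input.mp4 -qscale:v 2 /path/to/frame_folder/%05d.jpg
```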

### Water Depth Estimation
### 2.3 Water Depth Estimation

We provide three options, `stopsign`, `people`, and `ref`, for `--opt` to specify the three types of reference objects.
```shell
@@ -71,5 +107,5 @@ For input video, to compare the estimated water level with the groundtruths in `
python cmp_hydrograph.py --test-name=<test_name>
```

## Copyright
## 3 Copyright
This paper has been submitted to the Elsevier journal Computers, Environment and Urban Systems and is under review. The corresponding author is Xin Li (<[email protected]>). All rights are reserved.
4 changes: 2 additions & 2 deletions est_waterlevel.py
@@ -20,7 +20,7 @@ def get_parser():
parser.add_argument('--out-dir', default='output/waterlevel',
help='A file or directory to save output results.')
parser.add_argument('--opt', type=str,
help='Estimation options.')
help='Estimation options: "people", "stopsign", or "ref".')

return parser.parse_args()

@@ -33,7 +33,7 @@ def main(args):
out_dir = os.path.join(args.out_dir, f'{args.test_name}_{args.opt}')
os.makedirs(out_dir, exist_ok=True)

if args.opt in ['skeleton', 'stopsign']:
if args.opt in ['people', 'stopsign']:
est_by_obj_detection(img_list, water_mask_list, out_dir, args.opt)
elif args.opt == 'ref':
est_by_reference(img_list, water_mask_list, out_dir, args.test_name)
