Error when training FLINT #32
The videos are not shared because we do not have sharing rights to the MEAD
dataset. However, the videos are not needed to train FLINT (they are not part
of the input).
In your data config file, there should be a value `read_videos`. You can
set it to `False`. Let me know if this helps.
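The override can be sketched as follows. This is only an illustration: the nesting under a "data" key and the surrounding entries are assumed here, not taken from the actual inferno config files.

```python
# Minimal sketch of flipping the flag before training.
# The "data" nesting and neighboring keys are assumed, not the repo's
# actual config layout.
data_cfg = {
    "data": {
        "read_videos": True,  # the default in your checkout may differ
        # ... other data settings omitted
    }
}

# Videos are not part of FLINT's input, so reading them can be disabled:
data_cfg["data"]["read_videos"] = False
```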
On Wed, Nov 27, 2024, 10:12 虞夕言 ***@***.***> wrote:
Hi,
Thank you for your previous help and I apologize that I forgot to reply
that I had solved the problem.
Recently I started trying to train FLINT, but I get the error reported
below. I don't have the videos_aligned data, and I couldn't find a way to
download it from the instructions. Could you tell me whether this data is
obtained by preprocessing resampled_videos with process_mead.py?
Could you give me some help?
Could not import SPECTRE. Make sure you pull the repository with
submodules to enable SPECTRE.
Traceback (most recent call last):
File
"/home/hioukaoru/EMOTE/inferno/models/temporal/external/SpectrePreprocessor.py",
line 16, in
from spectre.src.spectre import SPECTRE
ModuleNotFoundError: No module named 'spectre'
The run will be saved to:
'/home/hioukaoru/EMOTE/inferno_apps/motion_prior/trainings/2024_11_27_14-57-21_1501762219146397196_L2lVqVae_MEADP_VAE'
Creating logger: WandbLogger
Short name len: 18
L2lVqVae_MEADP_VAE
wandb: Currently logged in as: 1277017449-yxy (use wandb login --relogin
to force relogin)
wandb: wandb version 0.18.7 is available! To upgrade, please run:
wandb: $ pip install wandb --upgrade
wandb: Tracking run with wandb version 0.10.33
wandb: Syncing run L2lVqVae_MEADP_VAE
wandb: View project at https://wandb.ai/1277017449-yxy/MotionPrior
wandb: View run at
https://wandb.ai/1277017449-yxy/MotionPrior/runs/2024_11_27_14-57-21_1501762219146397196
wandb: Run data is saved locally in
/home/hioukaoru/EMOTE/inferno_apps/motion_prior/trainings/2024_11_27_14-57-21_1501762219146397196_L2lVqVae_MEADP_VAE/wandb/run-20241127_145726-2024_11_27_14-57-21_1501762219146397196
wandb: Run wandb offline to turn off syncing.
The dataset is already processed. Loading
Loading metadata of FaceVideoDataset from:
'/home/hioukaoru/EMOTE/inferno_apps/assets/data/mead_25fps/processed/metadata.pkl'
/home/hioukaoru/EMOTE/inferno/models/temporal/TransformerMasking.py:90:
UserWarning: __floordiv__ is deprecated, and its behavior will change in a
future version of pytorch. It currently rounds toward 0 (like the 'trunc'
function NOT 'floor'). This results in incorrect rounding for negative
values. To keep the current behavior, use torch.div(a, b,
rounding_mode='trunc'), or for actual floor division, use torch.div(a, b,
rounding_mode='floor').
bias = torch.arange(start=0, end=max_seq_len).unsqueeze(1).repeat(1,period).view(-1)//(period)
creating the FLAME Decoder
/home/hioukaoru/EMOTE/inferno/models/DecaFLAME.py:93: UserWarning: To copy
construct from a tensor, it is recommended to use
sourceTensor.clone().detach() or
sourceTensor.clone().detach().requires_grad_(True), rather than
torch.tensor(sourceTensor).
torch.tensor(lmk_embeddings['dynamic_lmk_faces_idx'], dtype=torch.long))
/home/hioukaoru/EMOTE/inferno/models/DecaFLAME.py:95: UserWarning: To copy
construct from a tensor, it is recommended to use
sourceTensor.clone().detach() or
sourceTensor.clone().detach().requires_grad_(True), rather than
torch.tensor(sourceTensor).
torch.tensor(lmk_embeddings['dynamic_lmk_bary_coords'], dtype=self.dtype))
/home/hioukaoru/anaconda3/envs/work38/lib/python3.8/site-packages/pytorch_lightning/callbacks/model_checkpoint.py:432:
UserWarning: ModelCheckpoint(save_last=True, save_top_k=None, monitor=None)
is a redundant configuration. You can save the last checkpoint with
ModelCheckpoint(save_top_k=None, monitor=None).
rank_zero_warn(
Setting val_check_interval to 1.0
After training checkpoint strategy: best
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
/home/hioukaoru/anaconda3/envs/work38/lib/python3.8/site-packages/pytorch_lightning/core/datamodule.py:423:
LightningDeprecationWarning: DataModule.setup has already been called, so
it will not be called again. In v1.6 this behavior will change to always
call DataModule.setup.
rank_zero_deprecation(
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
  | Name           | Type                       | Params
--------------------------------------------------------------
0 | motion_encoder | L2lEncoderWithGaussianHead | 445 K
1 | motion_decoder | L2lDecoder                 | 511 K
--------------------------------------------------------------
957 K     Trainable params
0         Non-trainable params
957 K     Total params
3.829     Total estimated model params size (MB)
Validation sanity check: 0%| | 0/2 [00:00<?, ?it/s]
[ERROR] AssertionError in MEADPseudo3dDataset dataset while retrieving
sample 96, retrying with new index 2888
In total, there has been 0 failed attempts. This number should be very
small. If it's not, check the data.
See the exception message for more details.
[ERROR] AssertionError in MEADPseudo3dDataset dataset while retrieving
sample 64, retrying with new index 2727
[ERROR] AssertionError in MEADPseudo3dDataset dataset while retrieving
sample 0, retrying with new index 3260
Traceback (most recent call last):
File "/home/hioukaoru/EMOTE/inferno/datasets/VideoDatasetBase.py", line
369, in __getitem__
return self._getitem(index)
File "/home/hioukaoru/EMOTE/inferno/datasets/MEADDataModule.py", line
1139, in _getitem
sample = super()._getitem(index)
File "/home/hioukaoru/EMOTE/inferno/datasets/VideoDatasetBase.py", line
395, in _getitem
sample, start_frame, num_read_frames, video_fps, num_frames,
num_available_frames = self._get_video(index)
File "/home/hioukaoru/EMOTE/inferno/datasets/VideoDatasetBase.py", line
605, in _get_video
assert video_path.is_file(), f"Video {video_path} does not exist"
AssertionError: Video
/home/hioukaoru/EMOTE/inferno_apps/assets/data/mead_25fps/processed/videos_aligned/M032/front/contempt/level_1/010.mp4
does not exist
In total, there has been 0 failed attempts. This number should be very
small. If it's not, check the data.
In total, there has been 0 failed attempts. This number should be very
small. If it's not, check the data.
See the exception message for more details.
See the exception message for more details.
Traceback (most recent call last):
File "/home/hioukaoru/EMOTE/inferno/datasets/VideoDatasetBase.py", line
369, in __getitem__
return self._getitem(index)
File "/home/hioukaoru/EMOTE/inferno/datasets/MEADDataModule.py", line
1139, in _getitem
sample = super()._getitem(index)
File "/home/hioukaoru/EMOTE/inferno/datasets/VideoDatasetBase.py", line
395, in _getitem
sample, start_frame, num_read_frames, video_fps, num_frames,
num_available_frames = self._get_video(index)
File "/home/hioukaoru/EMOTE/inferno/datasets/VideoDatasetBase.py", line
605, in _get_video
assert video_path.is_file(), f"Video {video_path} does not exist"
AssertionError: Video
/home/hioukaoru/EMOTE/inferno_apps/assets/data/mead_25fps/processed/videos_aligned/M032/front/angry/level_3/007.mp4
does not exist
Traceback (most recent call last):
File "/home/hioukaoru/EMOTE/inferno/datasets/VideoDatasetBase.py", line
369, in __getitem__
return self._getitem(index)
File "/home/hioukaoru/EMOTE/inferno/datasets/MEADDataModule.py", line
1139, in _getitem
sample = super()._getitem(index)
File "/home/hioukaoru/EMOTE/inferno/datasets/VideoDatasetBase.py", line
395, in _getitem
sample, start_frame, num_read_frames, video_fps, num_frames,
num_available_frames = self._get_video(index)
File "/home/hioukaoru/EMOTE/inferno/datasets/VideoDatasetBase.py", line
605, in _get_video
assert video_path.is_file(), f"Video {video_path} does not exist"
AssertionError: Video
/home/hioukaoru/EMOTE/inferno_apps/assets/data/mead_25fps/processed/videos_aligned/M032/front/angry/level_1/001.mp4
does not exist
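Since every failure in the log comes from the same `video_path.is_file()` assertion, a pre-flight scan can surface all missing files at once instead of one retry at a time. A minimal sketch, assuming the directory layout shown in the traceback; `missing_videos` is a hypothetical helper, not part of the repo:

```python
from pathlib import Path

def missing_videos(root, rel_paths):
    """Return the relative paths under `root` that do not exist as files."""
    root = Path(root)
    return [p for p in rel_paths if not (root / p).is_file()]

# Two of the paths reported in the traceback above:
expected = [
    "M032/front/contempt/level_1/010.mp4",
    "M032/front/angry/level_3/007.mp4",
]

# With a root that does not exist, both paths are reported missing:
gone = missing_videos("/tmp/nonexistent_mead/videos_aligned", expected)
```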
|
Thank you for your timely help, but unfortunately it didn't work: the default value of `read_video` in my config file is already False. I spent some time verifying this and then found that setting the `use_original_video` value to True in VideoDatasetBase as well as in the other .py files helped! The code now seems to run successfully, but I'm wondering whether this could lead to other problems.
|
No, this seems about right. FLINT is a prior over sequences of FLAME parameters; videos are not needed in training. |
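The point above can be illustrated with shapes alone (the parameter counts below are assumed for illustration, not taken from the repo's config): a FLINT training batch is a sequence of per-frame FLAME parameter vectors, with no pixels involved.

```python
# Illustrative only: parameter counts are assumed, not the repo's actual values.
batch_size, seq_len = 4, 32      # sequences of 32 frames
n_expression, n_jaw = 50, 3      # FLAME expression coefficients + jaw pose
n_params = n_expression + n_jaw  # per-frame parameter vector length

# A FLINT training batch is per-frame parameter vectors -- no video frames:
batch = [[[0.0] * n_params for _ in range(seq_len)] for _ in range(batch_size)]
```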