
What hardware configuration is required for training? #5

Open
TongkunGuan opened this issue Mar 14, 2024 · 1 comment

TongkunGuan commented Mar 14, 2024

Great work!

I would like to consult with you about the specific details of the training process, including the type of GPU used (e.g., 3090, V100, etc.), the number of GPUs, and the duration of the training in days. Could you provide this information?

Thanks!

ByungKwanLee (Owner) commented:

Thanks for your interest in our work!

Each training step takes about two to three days on six A6000 GPUs with a batch size of one. We use a batch size of one because gathering the image and language parts into a larger batch has a technical issue; the inference code still has the same issue, so the code should also be run with a batch size of one.
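As a rough illustration (this is not the repository's code; the dataset, field names, and shapes below are hypothetical), a batch size of one sidesteps the collation problem when each sample mixes a fixed-size image tensor with a variable-length sequence of language tokens:

```python
# Minimal sketch, assuming a PyTorch-style dataset whose items pair an
# image tensor with a variable-length sequence of language token ids.
import torch
from torch.utils.data import Dataset, DataLoader

class ImageTextSamples(Dataset):
    """Hypothetical dataset: each item has an image part and a text part
    whose length differs from sample to sample."""
    def __init__(self, num_items: int = 8):
        self.num_items = num_items

    def __len__(self):
        return self.num_items

    def __getitem__(self, idx):
        image = torch.randn(3, 224, 224)               # fixed-size image part
        tokens = torch.randint(0, 32000, (16 + idx,))  # variable-length language part
        return {"image": image, "input_ids": tokens}

# With batch_size=1 the default collate function only adds a leading batch
# dimension, so the image and language parts can be gathered without any
# padding or custom collate_fn.
loader = DataLoader(ImageTextSamples(), batch_size=1, shuffle=True)

for batch in loader:
    image = batch["image"]          # shape (1, 3, 224, 224)
    input_ids = batch["input_ids"]  # shape (1, L), where L varies per sample
    # ... forward pass would go here ...
    break
```

With a batch size larger than one, the variable-length language parts would have to be padded or handled by a custom collate function, which is the part that currently has the technical issue.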
