
Out of memory error #42

Open
Kirillspec opened this issue May 9, 2023 · 6 comments

Comments

@Kirillspec

I have an 8 GB NVIDIA RTX 2060 Super card, Windows 10, and Miniconda, and I just installed PyTorch with this command:
conda install pytorch=1.13.0 torchvision pytorch-cuda=11.6 -c pytorch -c nvidia
I get an "Out of memory" error on the 5th step of the Jupyter notebook when running the "sample_text_to_3d" script.
Is there any way to solve this problem?

@Kabanosk

Kabanosk commented May 9, 2023

You can try lowering the batch_size parameter, e.g.

batch_size = 1
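
For context, in the stock sample_text_to_3d notebook the batch size is passed straight into the sampling cell, so the change would look roughly like this (a sketch based on the example notebook; everything other than batch_size follows the notebook's defaults as I remember them, so treat those values as assumptions):

batch_size = 1  # lowered from the notebook's default of 4; fewer latents sampled at once means less VRAM
guidance_scale = 15.0
prompt = 'a shark'

latents = sample_latents(
    batch_size=batch_size,
    model=model,
    diffusion=diffusion,
    guidance_scale=guidance_scale,
    model_kwargs=dict(texts=[prompt] * batch_size),
    progress=True,
    clip_denoised=True,
    use_fp16=True,
    use_karras=True,
    karras_steps=64,
    sigma_min=1e-3,
    sigma_max=160,
    s_churn=0,
)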

@Kirillspec
Author

Unfortunately, it doesn't help:
OutOfMemoryError: CUDA out of memory. Tried to allocate 768.00 MiB (GPU 0; 8.00 GiB total capacity; 6.14 GiB already allocated; 0 bytes free; 6.91 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
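
For reference, the allocator option mentioned in that error is controlled by the PYTORCH_CUDA_ALLOC_CONF environment variable, which must be in place before the first CUDA allocation. A minimal sketch, with 128 MiB as an arbitrary example value:

import os
os.environ['PYTORCH_CUDA_ALLOC_CONF'] = 'max_split_size_mb:128'  # example value; set before anything is allocated on the GPU

import torch

Note that this only mitigates fragmentation; it cannot free memory the model genuinely needs.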

@MethodJiao

MethodJiao commented May 10, 2023

Comment out or delete this code:
render_mode = 'nerf'  # you can change this to 'stf'
size = 64  # this is the size of the renders; higher values take longer to render.
cameras = create_pan_cameras(size, device)
for i, latent in enumerate(latents):
    images = decode_latent_images(xm, latent, cameras, rendering_mode=render_mode)
    display(gif_widget(images))
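
If you drop that render cell, the sampled latents can still be exported. The notebook's later mesh-export cell looks roughly like this (reproduced from memory, so the exact helpers decode_latent_mesh / tri_mesh should be treated as assumptions):

from shap_e.util.notebooks import decode_latent_mesh

for i, latent in enumerate(latents):
    # decode each sampled latent into a triangle mesh and save it to disk
    t = decode_latent_mesh(xm, latent).tri_mesh()
    with open(f'example_mesh_{i}.ply', 'wb') as f:
        t.write_ply(f)
    with open(f'example_mesh_{i}.obj', 'w') as f:
        t.write_obj(f)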

@Kirillspec
Author

> Comment out or delete this code:
> render_mode = 'nerf'  # you can change this to 'stf'
> size = 64  # this is the size of the renders; higher values take longer to render.
> cameras = create_pan_cameras(size, device)
> for i, latent in enumerate(latents):
>     images = decode_latent_images(xm, latent, cameras, rendering_mode=render_mode)
>     display(gif_widget(images))

Thank you very much, it works!

@DenisIsDenis

Hello! I have an NVIDIA GeForce GTX 1650 with 4 GB of video memory.

I get this error:

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 4.00 GiB total capacity; 3.47 GiB already allocated; 0 bytes free; 3.47 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

I have tried this:

  1. decrease the value of max_split_size_mb;
  2. reduce the value of batch_size to one;
  3. remove the code proposed by MethodJiao;

But none of the above helped solve the problem.

Just in case:

  • Processor: HexaCore AMD Ryzen 5 3600, 3600 MHz (36 x 100)
  • RAM: 32 GB
  • OS: Windows 10

@Kabanosk

I ran the model on Google Colab, and the memory needed to run it was around 7.6 GB. I don't think there is currently an option to reduce that to 4 GB. If you want to try this model, here is the link to my Google Colab where I tested it.
