One or two questions about the running of experimental code #5
Comments
Thanks a lot for reading our paper! (1) We provide the code to download the dataset (see README -- Download Data). The CNN data needs access to Google Drive. (2) Everything runs under Linux.
Hello, author. Thank you for your reply. We plan to reproduce your experiment, but at the moment we are working on a local Windows machine, so we could not download the data directly, and I ran into a problem running train.py. Do we need a VPN/proxy to run that program? Next, we plan to rent a Linux server online to run the code. In addition, my local computer has two GPUs, but one is the integrated graphics GPU and the other is an NVIDIA GPU. If we run the code locally on these two GPUs, would that satisfy the two GPUs the experiment requires?
Oh, sorry for missing this message.
Hello. I also want to ask you about the configuration. Due to limited hardware resources, could you change the code structure so that the BART and GPT models run on a single GPU? I don't currently have experimental conditions as good as 2 or 4 GPUs. Please reply; thank you very much.
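One standard workaround for a memory-limited single GPU is gradient accumulation: average the gradients from several small micro-batches before each parameter update, so one GPU emulates a larger batch. The toy below is a framework-agnostic sketch of that idea (it is not part of the authors' code; the function name and numbers are illustrative):

```python
# Toy illustration of gradient accumulation: apply one SGD update per
# `accum_steps` micro-batch gradients, using their average, so a single
# small GPU can emulate the effective batch size of a larger one.

def sgd_with_accumulation(grads, lr=0.1, accum_steps=4, w=0.0):
    """Accumulate `accum_steps` gradients, then take one averaged SGD step."""
    buffer = 0.0
    for i, g in enumerate(grads, 1):
        buffer += g                          # accumulate micro-batch gradient
        if i % accum_steps == 0:
            w -= lr * (buffer / accum_steps)  # one update with the averaged gradient
            buffer = 0.0
    return w

# Four micro-batches of size 1 behave like one batch of size 4:
print(sgd_with_accumulation([1.0, 2.0, 3.0, 4.0]))  # -0.25
```

This trades wall-clock time for memory: only one micro-batch's activations live on the GPU at a time, at the cost of more forward/backward passes per update.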
Hello. My local laptop currently has a discrete RTX 3060 GPU with 6 GB of video memory. Do you think such a configuration meets the basic requirements for running your progressive long-text generation experiment? Thank you very much for your reply!
Hi! Our model is BART-Large with max_length=1024 on both the encoder and decoder sides, and it takes more than 10 GB of GPU memory (roughly 15 GB, if I remember correctly), so I distributed the parameters across two 1080Ti GPUs. A single 6 GB GPU is not able to run the code.
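The memory figure can be sanity-checked with a back-of-the-envelope calculation. This sketch assumes fp32 Adam training (roughly four parameter-sized tensors in memory: weights, gradients, and two optimizer moments) and the commonly cited ~406M parameter count for BART-Large; neither number comes from this thread:

```python
# Rough parameter-related GPU-memory estimate for fine-tuning a model.
# Assumption: fp32 training with Adam keeps ~4 copies of each parameter
# (weights, gradients, two optimizer moments); activation memory for
# max_length=1024 sequences comes on top of this figure.

def training_memory_gib(n_params: float, bytes_per_param: int = 4, copies: int = 4) -> float:
    """Return parameter + gradient + optimizer-state memory in GiB."""
    return n_params * bytes_per_param * copies / 1024**3

BART_LARGE_PARAMS = 406e6  # ~406M parameters (assumed, not from the thread)
print(f"~{training_memory_gib(BART_LARGE_PARAMS):.1f} GiB before activations")  # ~6.0 GiB
```

The ~6 GiB above is only the parameter-related memory; activations for batch-size-times-1024-token sequences add several more GiB, which is consistent with the 10-15 GB the author reports and with a 6 GB card being insufficient.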
Hello, I'm honored to have read your paper, and I want to ask some questions. First, how can I download the experimental dataset used in the paper? Do I need a VPN/proxy? Second, did you run the whole experiment on Windows or on Linux?