Is the code related to Earthformer AR available? #37
Comments
Thanks for your interest! We have conducted preliminary empirical studies on Earthformer AR and found that its performance is far from satisfactory, though it was able to generate predictions with better perceptual quality. It is not mature enough to include in this repo right now. Nevertheless, it would be an interesting future direction. As alternatives, we recommend referring to VideoGPT and the Latent Video Transformer (LVT), from which Earthformer AR was inspired. Both are open source (VideoGPT code, LVT code) and convenient to run.
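The VideoGPT/LVT-style autoregressive decoding mentioned above boils down to extending a discrete token sequence one step at a time, feeding each sampled token back into the model. A minimal sketch of that sampling loop (the `logits_fn` here is a toy stand-in for a real transformer, not code from any of these repos):

```python
import numpy as np

def sample_autoregressive(logits_fn, prompt, steps, rng):
    """Autoregressive sampling: repeatedly score the next token given the
    sequence so far, sample it, and append it (VideoGPT/LVT-style loop)."""
    seq = list(prompt)
    for _ in range(steps):
        logits = logits_fn(seq)            # unnormalized scores over vocab
        p = np.exp(logits - logits.max())  # stable softmax
        p /= p.sum()
        seq.append(int(rng.choice(len(p), p=p)))
    return seq

# Toy "model" for illustration: strongly favors (last token + 1) mod 4.
def logits_fn(seq):
    out = np.full(4, -10.0)
    out[(seq[-1] + 1) % 4] = 10.0
    return out

rng = np.random.default_rng(0)
print(sample_autoregressive(logits_fn, [0], 3, rng))  # → [0, 1, 2, 3]
```

In the real pipelines, `logits_fn` is a transformer conditioned on the quantized tokens of the observed frames, and the sampled tokens are decoded back to pixels by the VQ-VAE decoder.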
Got it.
Thanks a lot!
Thanks for your interest, and thanks to @gaozhihan for the response. The team here is still improving the Earthformer model and will release Earthformer AR next year.
Nice work!
I notice that you pretrain a VQ-VAE to compress the image sequence into a discrete latent space, and explore an autoregressive decoder named Earthformer-AR.
I'm interested in the training details of this VQ-VAE and the autoregressive model!
Is the code related to Earthformer AR available?
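For context on the first stage described in the question: a VQ-VAE "compresses to a discrete latent space" by snapping each encoder output vector to its nearest codebook entry, so a frame becomes a grid of integer token indices. A minimal sketch of that quantization step (the shapes and names here are illustrative, not from the Earthformer codebase):

```python
import numpy as np

def vq_tokenize(latents, codebook):
    """Map each D-dim latent vector to the index of its nearest codebook
    entry. latents: (T, N, D) per-frame latents; codebook: (K, D)."""
    # squared Euclidean distance from every latent to every codebook vector
    d = ((latents[..., None, :] - codebook) ** 2).sum(-1)  # (T, N, K)
    return d.argmin(-1)                                    # (T, N) int tokens

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))  # K=8 codes, D=4 dims
# latents near codebook rows 0, 3, 7, plus a little noise
latents = codebook[[0, 3, 7]][None] + 0.01 * rng.normal(size=(1, 3, 4))
print(vq_tokenize(latents, codebook))  # → [[0 3 7]]
```

These integer tokens are what the autoregressive model (Earthformer-AR, or VideoGPT/LVT) is trained to predict; the VQ-VAE decoder then maps predicted tokens back to frames.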