diff --git a/docs/index.html b/docs/index.html
index 09b4fae..24b18c3 100644
--- a/docs/index.html
+++ b/docs/index.html
@@ -138,27 +138,19 @@
- Our approach augments neural radiance fields
- (NeRF) by optimizing an
- additional continuous volumetric deformation field that warps each observed point into a
- canonical 5D NeRF.
- We observe that these NeRF-like deformation fields are prone to local minima, and
- propose a coarse-to-fine optimization method for coordinate-based models that allows for
- more robust optimization.
- By adapting principles from geometry processing and physical simulation to NeRF-like
- models, we propose an elastic regularization of the deformation field that further
- improves robustness.
-
-
- We show that Nerfies can turn casually captured selfie
- photos/videos into deformable NeRF
- models that allow for photorealistic renderings of the subject from arbitrary
- viewpoints, which we dub "nerfies". We evaluate our method by collecting data
- using a
- rig with two mobile phones that take time-synchronized photos, yielding train/validation
- images of the same pose at different viewpoints. We show that our method faithfully
- reconstructs non-rigidly deforming scenes and reproduces unseen views with high
- fidelity.
+ Identifying Out-of-distribution (OOD) data is becoming increasingly critical as the real-world applications
+ of deep learning methods expand. Post-hoc methods modify softmax scores fine-tuned on outlier data or leverage
+ intermediate feature layers to identify distinctive patterns between In-Distribution (ID) and OOD samples.
+ Other methods focus on employing diverse OOD samples to learn discrepancies between ID and OOD.
+ These techniques, however, typically depend on the quality of the assumed outlier samples.
+ Density-based methods explicitly model class-conditioned distributions, but this requires long training
+ times or retraining the classifier. To tackle these issues, we introduce <i>FlowCon</i>, a new density-based
+ OOD detection technique. Our main innovation lies in efficiently combining the properties of normalizing
+ flows with supervised contrastive learning, ensuring robust representation learning with tractable density estimation.
+ Empirical evaluation shows the enhanced performance of our method on common vision datasets such as
+ CIFAR-10 and CIFAR-100, using pretrained ResNet18 and WideResNet classifiers. We also perform quantitative
+ analysis using likelihood plots and qualitative visualization using UMAP embeddings, and demonstrate the
+ robustness of the proposed method under various OOD contexts. Code will be open-sourced post decision.
@@ -186,66 +178,6 @@
- We can also animate the scene by interpolating the deformation latent codes of two input
- frames. Use the slider here to linearly interpolate between the left frame and the right
- frame.
-
-Start Frame
-End Frame
-
- Using Nerfies, you can re-render a video from a novel
- viewpoint such as a stabilized camera by playing back the training deformations.
-
-
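The density-based OOD scoring idea described in the new abstract can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: diagonal class-conditional Gaussians stand in for FlowCon's learned normalizing-flow densities, and a sample's score is its best class-conditional log-likelihood (higher means more in-distribution).

```python
import numpy as np

def fit_class_densities(feats, labels):
    """Fit a diagonal Gaussian per class (stand-in for a learned flow density)."""
    dens = {}
    for c in np.unique(labels):
        x = feats[labels == c]
        dens[c] = (x.mean(axis=0), x.var(axis=0) + 1e-6)
    return dens

def log_likelihood(x, mu, var):
    """Diagonal-Gaussian log-density of a single feature vector."""
    return -0.5 * np.sum(np.log(2.0 * np.pi * var) + (x - mu) ** 2 / var)

def ood_score(x, dens):
    """Score = best class-conditional log-likelihood; low values flag OOD."""
    return max(log_likelihood(x, mu, var) for mu, var in dens.values())

# Toy ID features: two well-separated classes in an 8-D feature space.
rng = np.random.default_rng(0)
id_feats = np.concatenate([rng.normal(0.0, 1.0, (200, 8)),
                           rng.normal(5.0, 1.0, (200, 8))])
labels = np.array([0] * 200 + [1] * 200)
dens = fit_class_densities(id_feats, labels)

id_point = np.zeros(8)         # near class 0: high likelihood
ood_point = np.full(8, 20.0)   # far from both classes: low likelihood
assert ood_score(id_point, dens) > ood_score(ood_point, dens)
```

Thresholding this score (e.g. at a percentile of ID validation scores) yields a binary ID/OOD decision; the abstract's contribution is making the density component a normalizing flow trained with a supervised contrastive objective rather than a fixed parametric family.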