Nam

    Study Log

    • Bundang, Korea
    • GitHub
    • Email

    Recent Posts

    A ConvNet for the 2020s

    January 19, 2022 · 5 minute read

    Abstract

    Training data-efficient image transformers & distillation through attention

    January 18, 2022 · 4 minute read

    Abstract

    Overcoming Catastrophic Forgetting in Incremental Few-Shot Learning by Finding Flat Minima

    January 18, 2022 · 6 minute read

    Abstract

    Masked Autoencoders Are Scalable Vision Learners

    January 14, 2022 · less than 1 minute read

    Abstract

    How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers

    January 14, 2022 · 4 minute read

    Abstract

    © 2023 Nam. Powered by Jekyll & Minimal Mistakes.