[Paper Review] (22’ECCV) Visual Prompt Tuning
📌 Paper: Visual Prompt Tuning — “The current modus operandi in adapting pre-trained models involves updating all the backbone parameters, i.e., full fine-tuning. This paper introduces Visual Prompt Tuning (VPT) as an efficient and effective alternative to full fine-tuning for large-scale Tr…” (arxiv.org)
📌 Overview: Deep learning research that leverages large-scale data and large-scale models, such as the GPT family, has grown rapidly. [ 🚨 ] Large companies with enormous computing power, such as NVIDIA and Google.. — 2024. 6. 26.
[Paper Review] (17’NIPS) Transformer: Attention Is All You Need
📌 Paper: Attention Is All You Need — “The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new…” (arxiv.org)
📌 Overview: [ 🚨 ] Existing sequence transduction models use encoder-decoder architectures built on recurrent or convolutional neural networks; their sequential nature makes parallelization impossible.. — 2024. 3. 9.
[Paper Review] CNN survey paper: Recent Advances in Convolutional Neural Networks
📌 Paper: Recent Advances in Convolutional Neural Networks — “In the last few years, deep learning has led to very good performance on a variety of problems, such as visual recognition, speech recognition and natural language processing. Among different types of deep neural networks, convolutional neural networks hav…” (arxiv.org)
📌 Overview: Provides a broad survey of the development of Convolutional Neural Networks ( ~ 2017 ). [ .. — 2024. 3. 9.