Hello 👋, I'm Tae-min Kim, and my dream is to become an AI researcher.
Profile
- KAIST IC Lab Intern, KAIST IC Lab / 2024.07 ~
- Alethio ML Researcher Intern, Alethio Co., Ltd. / 2023.08 ~ 2023.12
- LG Aimers 3rd, LG Aimers / 2023.07 ~ 2023.09
- NLP AI Developer 5th, Boostcamp AI Tech / 2023.03 ~ 2023.08
- Hansung University Undergraduate Researcher: Visual Intelligence Lab. / 2023.03 ~ 2023.04
- Hansung University Undergraduate Researcher: Marusys edu Co., Ltd. / 2022.06 ~ 2023.03
- Hansung University - Hansung Bllossom GPT Serving, Demo / 2024.05 ~ 2024.06
- Visualize Diffusion Cross-Attention Maps for Text-to-Image, Demo / 2023.10 ~ 2023.12
- Boostcamp AI Tech - LawBot (Legal-GPT) Project Serving, Final Project / 2023.07 ~ 2023.08
- Boostcamp AI Tech - Open-Domain Question Answering, ODQA / 2023.06.07 ~ 2023.06.22
- Boostcamp AI Tech - Topic Classification, TC / 2023.05.24 ~ 2023.06.01
- Boostcamp AI Tech - Relation Extraction, RE / 2023.05.03 ~ 2023.05.18
- Advanced in the 2023 K-Digital Training Hackathon / 2023.05.17
- Boostcamp AI Tech - Semantic Text Similarity, STS / 2023.04.12 ~ 2023.04.20
- Visual Intelligence Lab, Food Object Detection Project / 2023.03 ~ 2023.04
- Advanced to the finals of the 2023 Kyowon Group AI Challenge, Dacon / 2023.01.30 ~ 2023.02.13
- TV Hand Gesture Control Video Recognition AI Competition, Dacon / 2023.01.02 ~ 2023.02.06
- Advanced to the finals of the 2022 National Defense AI Competition, AI Connect / 2022.09.30 ~ 2022.12.01
- AI TeachableMachine Development, Marusys edu Co., Ltd. / 2022.06 ~ 2023.03
- kfkas/Hansung-Llama-3-8B / Huggingface
- kfkas/Legal-Llama-2-ko-7b-Chat / Huggingface
- YoonSeul/LawBot-level-3-KuLLM-5.8B-tae-2epoch / Huggingface
- kfkas/Llama-2-ko-7b-Chat / Huggingface
- kfkas/legal-question-filter-koelectra / Huggingface
- kfkas/t5-large-korean-P2G / Huggingface
- kfkas/t5-large-korean-news-title-klue-ynat / Huggingface
- kfkas/RoBERTa-large-Detection-G2P / Huggingface
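As a quick illustration (a sketch, not taken from any of these projects), a checkpoint such as kfkas/Llama-2-ko-7b-Chat can typically be loaded through the standard transformers API; the dtype and device choices below are assumptions made for the example.

```python
# Minimal usage sketch: load one of the checkpoints listed above with transformers.
# Assumes the repo is public and compatible with AutoModelForCausalLM/AutoTokenizer;
# the half-precision dtype and device handling are illustrative choices only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kfkas/Llama-2-ko-7b-Chat"  # any repo id from the list above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)
model.to("cuda" if torch.cuda.is_available() else "cpu")

prompt = "Introduce yourself briefly."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```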
- SLIP: Self-supervision meets Language-Image Pre-training / SLIP
- BEiT: BERT Pre-Training of Image Transformers / BEiT
- Fine-grained Interactive Language-Image Pre-Training / FILIP
- Align before Fuse: Vision and Language Representation Learning with Momentum Distillation / ALBEF
- Visual Instruction Tuning / LLaVA
- VATT: Transformers for Multimodal Self-Supervised Learning from Raw Video, Audio and Text / VATT
- One Embedding Space To Bind Them All / ImageBind
- HyperNetworks for Fast Personalization of Text-to-Image Models / HyperDreamBooth
- A Successor to Transformer for Large Language Models / RetNet
- Language Models are Unsupervised Multitask Learners / GPT-2
- Learning Transferable Visual Models From Natural Language Supervision / CLIP
- Improving Language Understanding by Generative Pre-Training / GPT-1
- Pre-training of Deep Bidirectional Transformers for Language Understanding / BERT
- Attention Is All You Need / Transformer
- Effective Approaches to Attention-based Neural Machine Translation / Seq2Seq with Attention
- Sequence to Sequence Learning with Neural Networks / Seq2Seq
- TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models / TrOCR
- SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers / SegFormer
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale / Vision Transformer
- Involution: Inverting the Inherence of Convolution for Visual Recognition / Involution
- Micro-Batch Training with Batch-Channel Normalization and Weight Standardization / Weight Standardization
- Big Transfer (BiT): General Visual Representation Learning / BiT
Baekjoon Tier