list of efficient attention modules
Implementation of Siamese neural networks built upon a multi-head attention mechanism for the text semantic similarity task.
A Faster PyTorch Implementation of Multi-Head Self-Attention
Flexible Python library providing building blocks (layers) for reproducible Transformers research (TensorFlow ✅, PyTorch 🔜, and JAX 🔜)
Implementation of "Attention is All You Need" paper
Chatbot using TensorFlow (the model is a Transformer), in Korean.
Semantic segmentation is an important task in computer vision, and its applications have grown in popularity over the last decade. This repository groups publications that use various forms of segmentation; in particular, every paper is built on a Transformer.
Joint text classification on multiple levels with multiple labels, using a multi-head attention mechanism to wire two prediction tasks together.
Synthesizer self-attention is a recent alternative to dot-product self-attention, with potential benefits from removing the query-key dot product; a sketch follows below.
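For contrast with the dot-product mechanism sketched above, the dense Synthesizer variant (Tay et al., 2020) predicts each token's attention weights directly with a small MLP, so no query-key dot product is computed. This is a minimal sketch under stated assumptions: the names `SynthesizerDense` and `max_len` are hypothetical, and a causal mask is included to match the description.

```python
# Minimal sketch of dense Synthesizer self-attention with a causal mask.
# Attention logits come from an MLP on each token, not from Q·K^T.
import torch
import torch.nn as nn

class SynthesizerDense(nn.Module):
    def __init__(self, d_model: int, max_len: int):
        super().__init__()
        # MLP mapping each token to a row of attention logits over positions
        self.attn_mlp = nn.Sequential(
            nn.Linear(d_model, d_model),
            nn.ReLU(),
            nn.Linear(d_model, max_len),
        )
        self.v_proj = nn.Linear(d_model, d_model)
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model); requires seq_len <= max_len
        b, t, _ = x.shape
        logits = self.attn_mlp(x)[:, :, :t]                    # (batch, t, t)
        causal = torch.tril(torch.ones(t, t, dtype=torch.bool, device=x.device))
        logits = logits.masked_fill(~causal, float("-inf"))    # no peeking ahead
        attn = logits.softmax(dim=-1)
        out = attn @ self.v_proj(x)      # mix values with synthesized weights
        return self.out_proj(out)
```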
This repository contains the code for the paper "Attention Is All You Need", i.e. the Transformer.
An experimental project for autonomous vehicle driving perception with steering angle prediction and semantic segmentation using a combination of UNet, attention and transformers.
A from-scratch implementation of the Transformer as presented in the paper "Attention Is All You Need".
Simple GPT with multi-head attention for char-level tokens, inspired by Andrej Karpathy's video lectures: https://github.com/karpathy/ng-video-lecture
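As a rough illustration of the char-level setup such a GPT uses, the sketch below maps characters to token ids and builds the causal mask that keeps each position from attending to future characters; variable names are illustrative, and the mask could be passed to an attention module like the one sketched earlier in this list.

```python
# Char-level tokenization plus a causal mask (illustrative names only).
import torch

text = "hello attention"
vocab = sorted(set(text))
stoi = {ch: i for i, ch in enumerate(vocab)}    # char -> token id
itos = {i: ch for ch, i in stoi.items()}        # token id -> char

tokens = torch.tensor([stoi[c] for c in text])  # (seq_len,)
seq_len = tokens.size(0)

# Causal mask: position i may attend only to positions <= i
causal_mask = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
print(tokens[:5])
print(causal_mask[:4, :4])
```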
Very simple implementation of GPT architecture using PyTorch and Jupyter.
This package is a TensorFlow 2/Keras implementation of Graph Attention Network embeddings and also provides a trainable layer for multi-head graph attention.
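As an illustration of the mechanism rather than of that package's TensorFlow 2/Keras API, here is a minimal dense-adjacency sketch of a multi-head graph attention (GAT-style) layer in PyTorch; the class name `MultiHeadGATLayer` and its arguments are assumptions, and the adjacency matrix is assumed to include self-loops.

```python
# Minimal dense-adjacency sketch of a multi-head graph attention layer.
# adj is assumed to include self-loops so every row has at least one edge.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadGATLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, num_heads: int = 4):
        super().__init__()
        self.num_heads, self.out_dim = num_heads, out_dim
        self.W = nn.Linear(in_dim, num_heads * out_dim, bias=False)
        # One attention vector per head, split over source/target features
        self.a_src = nn.Parameter(torch.randn(num_heads, out_dim) * 0.1)
        self.a_dst = nn.Parameter(torch.randn(num_heads, out_dim) * 0.1)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (num_nodes, in_dim), adj: (num_nodes, num_nodes) with 1 for edges
        n = x.size(0)
        h = self.W(x).view(n, self.num_heads, self.out_dim)          # (N, H, F)
        # Pairwise logits e[i, j, h] = LeakyReLU(a_src·h_i + a_dst·h_j)
        src = (h * self.a_src).sum(-1)                               # (N, H)
        dst = (h * self.a_dst).sum(-1)                               # (N, H)
        e = F.leaky_relu(src.unsqueeze(1) + dst.unsqueeze(0), 0.2)   # (N, N, H)
        e = e.masked_fill(adj.unsqueeze(-1) == 0, float("-inf"))
        attn = e.softmax(dim=1)                     # normalize over neighbors j
        # Weighted sum of neighbor features; heads are concatenated
        out = torch.einsum("ijh,jhf->ihf", attn, h)
        return out.reshape(n, self.num_heads * self.out_dim)
```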
A PyTorch Implementation of PGL-SUM from "Combining Global and Local Attention with Positional Encoding for Video Summarization", Proc. IEEE ISM 2021
Annotated vanilla implementation in PyTorch of the Transformer model introduced in 'Attention Is All You Need'.
A repository of attention mechanism implementations in PyTorch.
Official implementation of the paper "FedLSF: Federated Local Graph Learning via Specformers"