Google Research 3rd YouTube-8M Video Understanding Challenge 2019. Temporal localization of topics within video. International Conference on Computer Vision (ICCV) 2019.
RSANet: Recurrent Slice-wise Attention Network for Multiple Sclerosis Lesion Segmentation (MICCAI 2019)
An implementation of Transformer Networks using Chainer
A TensorFlow 2.0 Implementation of the Transformer: Attention Is All You Need
Using attention network to extend image quality assessment algorithms for video quality assessment
This is the official source code of our IEA/AIE 2021 paper
High Dynamic Range Image Synthesis via Attention Non-Local Network
This repository contains various attention mechanisms, such as Bahdanau, soft, additive, and hierarchical attention, implemented in PyTorch, TensorFlow, and Keras
Speech recognition model for spoken Macedonian.
This work proposes an end-to-end tracking framework with balanced performance, built on a high-level feature refinement module. The feature refinement module enhances the target's feature representation, allowing the network to capture salient information for locating the target. The attention module is employed inside the fe…
A customized version of the Relational Aware Graph Attention Network for large scale EHR records.
Image captioning using beam search heuristic on top of the encoder-decoder based architecture
Sequence-to-Sequence with Attention Mechanisms in TensorFlow v2
Graphs are a general language for describing and analyzing entities with relations/interactions.
Gated-ViGAT. Code and data for our paper: N. Gkalelis, D. Daskalakis, V. Mezaris, "Gated-ViGAT: Efficient bottom-up event recognition and explanation using a new frame selection policy and gating mechanism", IEEE International Symposium on Multimedia (ISM), Naples, Italy, Dec. 2022.
Python 3 supported version for DySAT
An attention network for predicting peptide lengths (and other features) from mass spectrometry data.
Efficient Visual Tracking with Stacked Channel-Spatial Attention Learning
TF2 Deep FloorPlan Recognition using a Multi-task Network with Room-boundary-Guided Attention. Enable tensorboard, quantization, flask, tflite, docker, github actions and google colab.
Deep learning model for non-coding regulatory variants
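Several of the repositories above implement additive (Bahdanau) attention, where a decoder query is scored against each encoder state through a small feed-forward network. As a rough orientation only, here is a minimal NumPy sketch of that scoring step; the function and variable names are illustrative and not taken from any of the listed projects:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def additive_attention(query, keys, W1, W2, v):
    """Bahdanau-style additive attention (illustrative sketch).

    query: (d_q,) decoder state; keys: (T, d_k) encoder states.
    Returns the context vector and the attention weights.
    """
    # score_i = v^T tanh(W1 @ query + W2 @ key_i)
    scores = np.array([v @ np.tanh(W1 @ query + W2 @ k) for k in keys])
    weights = softmax(scores)   # distribution over the T time steps
    context = weights @ keys    # weighted sum of encoder states
    return context, weights

# Toy usage with random parameters
rng = np.random.default_rng(0)
d_q, d_k, d_a, T = 4, 4, 8, 5
W1 = rng.standard_normal((d_a, d_q))
W2 = rng.standard_normal((d_a, d_k))
v = rng.standard_normal(d_a)
query = rng.standard_normal(d_q)
keys = rng.standard_normal((T, d_k))
context, weights = additive_attention(query, keys, W1, W2, v)
```

The attention weights form a probability distribution over encoder time steps, and the context vector is their weighted average of the encoder states; soft, hierarchical, and other variants in the repositories above mainly differ in how the scores are computed.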