Interpretable End-to-end Urban Autonomous Driving with Latent Deep Reinforcement Learning
Implementation of "Disentangled Representation Learning for Non-Parallel Text Style Transfer" (ACL 2019) in PyTorch
Graph Representation Analysis for Connected Embeddings
This repository contains the implementation of SimplEx, a method to explain the latent representations of black-box models with the help of a corpus of examples. For more details, please read our NeurIPS 2021 paper: 'Explaining Latent Representations with a Corpus of Examples'.
Code for our paper -- Hyperprior Induced Unsupervised Disentanglement of Latent Representations (AAAI 2019)
ACM CHIL 2020: "Survival Cluster Analysis"
ICCV 2023: "Householder Projector for Unsupervised Latent Semantics Discovery"
Tripod is a tool and ML model for computing latent representations of large sequences
Variational Interpretable Concept Embeddings
Code associated with the paper "Prior Image-Constrained Reconstruction using Style-Based Generative Models" accepted to ICML 2021.
Simple PyTorch implementation of BYOL: Bootstrap Your Own Latent (https://arxiv.org/abs/2006.07733) [Colab version available]
Official repository for the "Multiple wavefield solutions in physics-informed neural networks using latent representation" paper.
A study of the effect of normalization on CNN model predictions
Latent-Explorer is the Python implementation of the framework proposed in the paper "Unveiling LLMs: The Evolution of Latent Representations in a Dynamic Knowledge Graph".
Investigating the mapping of articulations from image space to latent space using neural networks.
TensorFlow code and LaTex for Bachelor Thesis: Understanding Variational Autoencoders' Latent Representations of Remote Sensing Images 🌍
Working towards deliverable 5.3
📜 [MIDL 2022] "Sensor to Image Heterogeneous Domain Adaptation Network", Ishikaa Lunawat, Vignesh S, S P Sharan
Latent Representation and Exploration of Images Using Variational AutoEncoders
This algorithm exploits relationships between variables to improve the reconstruction performance of a variational autoencoder (VAE). Features are grouped by a correlation score using a distance-based clustering method, and the resulting clusters serve as inputs to an attention-based VAE; a minimal sketch of the clustering step follows below.
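As an illustration of the correlation-based grouping described above (not the repository's own code), the sketch below clusters feature columns by a 1 − |correlation| distance; the resulting index groups would then feed separate branches of an attention-based VAE encoder. The function name `correlation_clusters`, the average-linkage choice, and the cluster count are assumptions for this example.

```python
# Minimal sketch: group features by correlation distance before encoding with a VAE.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def correlation_clusters(X: np.ndarray, n_clusters: int = 4):
    """Group columns of X via distance-based clustering on 1 - |correlation|."""
    corr = np.corrcoef(X, rowvar=False)        # feature-by-feature correlation matrix
    dist = 1.0 - np.abs(corr)                  # strongly correlated features -> small distance
    condensed = dist[np.triu_indices_from(dist, k=1)]
    labels = fcluster(linkage(condensed, method="average"),
                      t=n_clusters, criterion="maxclust")
    return [np.where(labels == c)[0] for c in np.unique(labels)]

# Example: 500 samples, 12 features; each group of column indices would be
# passed to its own encoder branch of the attention-based VAE.
X = np.random.randn(500, 12)
for i, cols in enumerate(correlation_clusters(X)):
    print(f"cluster {i}: feature columns {cols.tolist()}")
```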