
details about the decoder architecture #35

Open
geruome opened this issue Apr 29, 2024 · 0 comments
Comments


geruome commented Apr 29, 2024

Hi, I am very interested in this series of work and have a question about the decoder architecture.
I noticed that the decoder architecture is slightly different from the one in the original Transformer paper.
[Two diagrams: the decoder architecture in this work vs. the decoder in the original Transformer]
The differences are circled in red: in your method, the K of the second attention block's QKV and the skip connection come from the encoder, whereas in the original they come from the decoder.
Is this an improvement to the Transformer architecture taken from an existing paper, or is it your own design?
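
To make the question concrete, here is a minimal PyTorch sketch of the two wirings as I read the diagrams. The module name, the choice to keep Q from the decoder and V from the encoder, and all dimensions are my own assumptions for illustration, not taken from this repo:

```python
import torch
import torch.nn as nn


class SecondAttentionBlock(nn.Module):
    """Decoder's second (cross-)attention sub-layer, with a flag choosing
    whether K and the skip connection are taken from the encoder output
    or from the decoder stream (the two wirings compared above)."""

    def __init__(self, d_model: int, n_heads: int, k_and_skip_from_encoder: bool):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)
        self.k_and_skip_from_encoder = k_and_skip_from_encoder

    def forward(self, dec: torch.Tensor, enc: torch.Tensor) -> torch.Tensor:
        # Q always comes from the decoder stream in this sketch; V is
        # taken from the encoder output (an assumption on my part).
        if self.k_and_skip_from_encoder:
            # Wiring circled in red: K and the residual from the encoder.
            k, skip = enc, enc
        else:
            # The other wiring: K and the residual from the decoder stream.
            k, skip = dec, dec
        out, _ = self.attn(query=dec, key=k, value=enc)
        return self.norm(out + skip)


# Example with equal source/target lengths so both variants run:
block = SecondAttentionBlock(d_model=512, n_heads=8, k_and_skip_from_encoder=True)
dec = torch.randn(2, 16, 512)  # (batch, tgt_len, d_model)
enc = torch.randn(2, 16, 512)  # (batch, src_len, d_model)
print(block(dec, enc).shape)   # torch.Size([2, 16, 512])
```

One thing I notice from writing this out: taking the skip connection from the encoder output only type-checks when the source and target sequence lengths match, which may itself be a relevant design constraint.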
