This content originally appeared on DEV Community and was authored by Rijul Rajesh
In the previous article, we generated the first output word from the transformer.
So far the translation is correct, but the decoder will not stop until it produces an <EOS> token.
Feeding the Output Back into the Decoder
Now, we take the translated word “vamos” and feed it back into a copy of the decoder’s embedding layer to continue the process.
Just like before, we repeat the same steps (sketched in code after this list):
- Get the word embeddings for “vamos”
- Add positional encoding
- Calculate self-attention values using the same weights used for the <EOS> token
- Add residual connections
- Compute encoder–decoder attention using the same set of weights
- Add another set of residual connections
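To make these steps concrete, here is a minimal PyTorch sketch of one decoding step. Everything here is illustrative: the tiny embedding size, the toy two-word vocabulary, the sinusoidal positional-encoding helper, and the random tensor standing in for the encoder's output are all assumptions, and nn.MultiheadAttention is a stand-in for the attention math covered in earlier parts.

```python
import math
import torch
import torch.nn as nn

d_model = 4                       # tiny embedding size, purely for illustration
vocab = {"<EOS>": 0, "vamos": 1}  # toy output vocabulary (assumed)

# These layers are created once and reused at every decoding step,
# which is what "using the same weights" means in the list above.
embedding = nn.Embedding(len(vocab), d_model)
self_attn = nn.MultiheadAttention(d_model, num_heads=1, batch_first=True)
cross_attn = nn.MultiheadAttention(d_model, num_heads=1, batch_first=True)

def positional_encoding(seq_len: int, d_model: int) -> torch.Tensor:
    """Standard sinusoidal positional encodings, one row per position."""
    pos = torch.arange(seq_len).unsqueeze(1).float()
    div = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(pos * div)
    pe[:, 1::2] = torch.cos(pos * div)
    return pe

enc_out = torch.randn(1, 3, d_model)  # random stand-in for the encoder's output

# The decoder input now holds everything generated so far: the start token and "vamos".
ids = torch.tensor([[vocab["<EOS>"], vocab["vamos"]]])
x = embedding(ids)                                  # word embeddings
x = x + positional_encoding(ids.size(1), d_model)   # add positional encoding

# Self-attention over the tokens generated so far (no causal mask is needed here,
# since only already-generated tokens are present), plus a residual connection.
out, _ = self_attn(x, x, x)
x = x + out

# Encoder-decoder attention: queries come from the decoder, keys and values
# from the encoder's output, followed by another residual connection.
out, _ = cross_attn(x, enc_out, enc_out)
x = x + out
```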
Generating the Next Word
Next, we pass the values representing “vamos” through the same fully connected layer and softmax function that we used earlier.
This time, the decoder outputs the <EOS> token, which signals the end of the sentence.
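As a rough sketch of this last step (again with toy sizes and a random stand-in for the decoder output at the “vamos” position), the fully connected layer and softmax look like this; in a trained model, the most likely word at this point would be <EOS>:

```python
import torch
import torch.nn as nn

d_model, vocab_size = 4, 2
id_to_word = {0: "<EOS>", 1: "vamos"}

fc = nn.Linear(d_model, vocab_size)  # the same fully connected layer, reused every step

x_vamos = torch.randn(d_model)       # stand-in for the decoder output at the "vamos" position

logits = fc(x_vamos)                   # one score per vocabulary word
probs = torch.softmax(logits, dim=-1)  # softmax turns scores into probabilities
next_id = int(torch.argmax(probs))     # greedily pick the most likely word

# With trained weights, this branch would fire here and end the translation.
if id_to_word[next_id] == "<EOS>":
    print("End of sentence: decoding stops here.")
```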
Final Output
At this point, the decoding process is complete.
We have successfully translated the input phrase using the transformer.
So, just to recap, the transformer works as follows (the whole decoding loop is sketched in code after the list):
- Word embeddings convert words into numerical representations
- Positional encoding keeps track of word order
- Self-attention captures relationships within the input and output
- Encoder–decoder attention connects input and output, ensuring important information is preserved
- Residual connections help different components focus on specific tasks and improve training
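Putting the loop together: the sketch below is a hypothetical greedy-decoding wrapper, where decoder_step is an assumed callable standing in for everything above (embedding, positional encoding, attention, residuals, fully connected layer, softmax, argmax) that returns the next token id.

```python
def greedy_decode(decoder_step, max_len=20, eos_id=0):
    """Run the decoder one token at a time until <EOS> (or max_len)."""
    ids = [eos_id]                   # decoding is seeded with the <EOS>/start token
    for _ in range(max_len):
        next_id = decoder_step(ids)  # embeddings -> attention -> FC layer -> softmax -> argmax
        ids.append(next_id)
        if next_id == eos_id:
            break                    # <EOS> signals the end of the sentence
    return ids

# Toy decoder_step that "translates" to: vamos (id 1), then <EOS> (id 0).
print(greedy_decode(lambda ids: 1 if len(ids) == 1 else 0))  # -> [0, 1, 0]
```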
In the next article, we will start exploring decoder-only transformers.
Looking for an easier way to install tools, libraries, or entire repositories?
Try Installerpedia: a community-driven, structured installation platform that lets you install almost anything with minimal hassle and clear, reliable guidance.
Just run:
ipm install repo-name
… and you’re done! 🚀