# Transformer Architecture

Transformer architecture refers to the neural network design introduced by researchers at Google Brain in the 2017 paper "Attention Is All You Need."

Superseding its predecessor architecture, the RNN (Recurrent Neural Network), the transformer takes a fundamentally different approach to data processing. Rather than consuming streams of information one step at a time in chronological order, transformer architectures process entire sequences of data simultaneously, using attention to weigh how relevant each element is to every other.

Rooted in identifying core concepts by allocating its focus accurately, the transformer architecture avoids wasteful processing through four key components: Attention Mechanisms, Multi-head Attention, Feed-Forward Layers, and Normalization Layers.
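The components above can be sketched in a few lines of NumPy. This is a minimal illustration, not ChainGPT's actual implementation: it shows single-head scaled dot-product attention (the core of the attention mechanism) followed by a residual connection and layer normalization, processing all tokens of a toy sequence at once rather than step by step.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)
    return softmax(scores) @ V

def layer_norm(x, eps=1e-5):
    # Normalize each token's feature vector to zero mean, unit variance.
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

# Toy sequence: 4 tokens, model dimension 8. All tokens are
# processed simultaneously — there is no recurrence over time steps.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = layer_norm(x + scaled_dot_product_attention(x, x, x))
print(out.shape)  # (4, 8)
```

In a full transformer block, this attention step is repeated across several heads in parallel (multi-head attention) and followed by a position-wise feed-forward layer, each wrapped in the same residual-plus-normalization pattern.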

ChainGPT makes good use of the transformer architecture in the design of its AI, allowing users to submit a theoretically unlimited variety of input requests and have them handled aptly.

[**Disclaimer**](/misc/legal-docs/disclaimer.md)


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.chaingpt.org/overview/learn-the-concepts/transformer-architecture.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.
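As a sketch, the request above can be issued from Python's standard library. The question string here is a hypothetical example; the only requirement is that it is URL-encoded and passed as the `ask` query parameter.

```python
import urllib.parse
import urllib.request

def build_ask_url(question: str) -> str:
    # URL-encode the question and attach it as the `ask` query parameter.
    base = ("https://docs.chaingpt.org/overview/learn-the-concepts/"
            "transformer-architecture.md")
    return base + "?ask=" + urllib.parse.quote(question)

url = build_ask_url("What is multi-head attention?")
print(url)

# To actually perform the GET request (requires network access):
# with urllib.request.urlopen(url) as resp:
#     answer = resp.read().decode("utf-8")
```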

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
