TU Munich: New AI chip computes locally and independently of the cloud

The new AI chip mounted on a circuit board. (© Andreas Heddergott/TUM)

A team at the Technical University of Munich (TUM) led by Prof. Hussam Amrouch has developed an AI chip that works without an internet connection or cloud server. The architecture of the chip, called AI Pro, is modeled on the human brain. Its neuromorphic structure allows data to be processed directly on the device, which significantly increases data security while saving energy. The first prototypes have already been produced by semiconductor manufacturer GlobalFoundries in Dresden.

In contrast to conventional processors, where the computing and memory units are separate, the AI Pro combines both components. This is made possible by so-called hyperdimensional computing: the chip recognizes patterns and similarities without having to rely on large amounts of training data. Instead of analyzing millions of images – as is common with deep learning models – it processes abstract features, such as the fact that cars have four wheels and move on roads. Prof. Amrouch, who is working on future chip generations in his Garching laboratory at the Siemens Technology Center, explains: "Humans learn by making connections – our chip does exactly that."
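How a chip can classify objects from a handful of abstract features, rather than from millions of training images, can be illustrated with a small hyperdimensional-computing sketch. The following Python toy is not TUM's implementation; the feature names, dimensionality, and encoding scheme are illustrative assumptions only.

```python
import numpy as np

D = 10_000                       # hypervector dimensionality (typical for HDC; assumed here)
rng = np.random.default_rng(0)

def random_hv():
    """Random bipolar hypervector with entries in {-1, +1}."""
    return rng.choice([-1, 1], size=D)

# Atomic hypervectors for abstract features and their possible values (illustrative names)
features = {name: random_hv() for name in ["wheels", "moves_on", "carries"]}
values   = {name: random_hv() for name in ["four", "many", "road", "rail", "people"]}

def encode(pairs):
    """Bind each feature to its value (elementwise product), then bundle by majority vote."""
    bound = [features[f] * values[v] for f, v in pairs]
    return np.sign(np.sum(bound, axis=0))

def similarity(a, b):
    """Normalized dot product: near 0 for unrelated concepts, 1 for identical ones."""
    return float(a @ b) / D

# Concepts are described by a few abstract features, not by millions of images
car   = encode([("wheels", "four"), ("moves_on", "road"), ("carries", "people")])
train = encode([("wheels", "many"), ("moves_on", "rail"), ("carries", "people")])

# An unknown object with four wheels that moves on roads is closest to "car"
query = encode([("wheels", "four"), ("moves_on", "road"), ("carries", "people")])
print(f"query vs car:   {similarity(query, car):.2f}")    # ~1.0
print(f"query vs train: {similarity(query, train):.2f}")  # noticeably lower, ~0.25
```

Because comparing such high-dimensional vectors reduces to cheap elementwise operations, the approach lends itself to hardware that merges memory and compute, as described above.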


For Prof. Hussam Amrouch, holder of the professorship for "AI Processor Design" at TUM, one thing is clear: "The future belongs to those who control the hardware." (© Andreas Heddergott/TUM)


The low energy requirement is particularly remarkable: in a simple training example, the chip consumed just 24 microjoules, whereas conventional AI chips require ten to a hundred times as much. The AI Pro therefore sets new standards in energy efficiency. Prof. Amrouch sees this as an important basis for specialized, sustainable AI applications.

In contrast to universal cloud AI platforms such as those from NVIDIA, the AI Pro focuses on individual solutions. With around 10 million transistors, the chip is far less complex than high-end GPUs, but Amrouch says this was a deliberate choice: "Our focus is on decentralized intelligence – directly in the device, without detours via the cloud." This reduces not only latency but also the carbon footprint, since no data center is needed.

Another advantage is that the chips can be tailored to specific applications. And because data never leaves the device, data protection concerns and unstable internet connections are no longer an issue. (oe)
