
🌐 Google wants to teach AI to be surprised and to forget

Most existing neural networks struggle to process long data streams and to generate new knowledge through reasoning, even though these are key skills for solving real-world problems. A model's memory is a single large, unstructured matrix in which it is easy to get "confused."

Google Research engineers have developed a new family of AI architectures called Titans. Through multi-component memory, these architectures can bring AI closer to the capabilities of the human brain.

Main features:

🛑 Like human memory, Titans' memory consists of three segments: short-term (core) for the current task, long-term (for remembering past data), and persistent for context-independent background knowledge.

🛑 The model has learned to be "surprised": it memorizes unexpected information more thoroughly while "forgetting" unused data to free up resources.

🛑 Titans continue to learn while solving problems, continuously filtering information into what is needed and what is not, while the memory components constantly interact.
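The "surprise and forget" idea above can be illustrated with a toy sketch. This is not the actual Titans implementation (the paper defines surprise via the gradient of an associative-memory loss); here surprise is simply the prediction error, write strength scales with it, and a decay factor plays the role of forgetting:

```python
import numpy as np

rng = np.random.default_rng(0)

DIM = 8
memory = np.zeros((DIM, DIM))  # toy long-term memory, updated at test time

def update_memory(memory, key, value, decay=0.9, lr=0.5):
    """One surprise-gated write with forgetting.

    decay < 1 slowly erases old associations ("forgetting");
    the write strength scales with the prediction error ("surprise").
    """
    prediction = memory @ key        # what the memory currently recalls
    surprise = value - prediction    # the unexpected component
    # Forget a little of everything, then store the surprising part.
    return decay * memory + lr * np.outer(surprise, key)

key = rng.standard_normal(DIM)
key /= np.linalg.norm(key)           # unit key keeps the update stable
value = rng.standard_normal(DIM)

# Repeated exposure: surprise shrinks as the association is memorized.
errors = []
for _ in range(20):
    errors.append(np.linalg.norm(value - memory @ key))
    memory = update_memory(memory, key, value)

print(errors[0] > errors[-1])
```

With each repetition the association becomes less surprising, so later writes change the memory less, while the decay term gradually frees capacity taken up by unused associations.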

The new architecture outperforms GPT-4, Llama3-80B, and other models by up to a factor of two on reasoning and information-retrieval tests over very long texts, as well as on real-world tasks such as DNA modeling.

Titans can operate on inputs exceeding 2M tokens (about 6,000 pages of text). Google's Gemini Experimental models can handle similar volumes, but less effectively. For comparison, GPT-4's context window is 128K tokens.
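The "about 6,000 pages" figure checks out under common rule-of-thumb conversions (the per-token and per-page numbers below are assumptions, not values from the post):

```python
TOKENS = 2_000_000
WORDS_PER_TOKEN = 0.75   # typical for English text
WORDS_PER_PAGE = 250     # standard manuscript page

pages = TOKENS * WORDS_PER_TOKEN / WORDS_PER_PAGE
print(round(pages))  # → 6000
```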

More on the topic:

➡️ Machine Psychology: AI With an Ability to Reflect

➡️ What We Lack on the Path to AGI

#news #Google #science @hiaimediaen
