lzhbrian's Collections
NN Arch Components

updated 8 days ago
  • STEM: Scaling Transformers with Embedding Modules

    Paper • 2601.10639 • Published 11 days ago • 1

  • Deep Delta Learning

    Paper • 2601.00417 • Published 25 days ago • 33

  • mHC: Manifold-Constrained Hyper-Connections

    Paper • 2512.24880 • Published 26 days ago • 282

  • VersatileFFN: Achieving Parameter Efficiency in LLMs via Adaptive Wide-and-Deep Reuse

    Paper • 2512.14531 • Published Dec 16, 2025 • 14

  • Stronger Normalization-Free Transformers

    Paper • 2512.10938 • Published Dec 11, 2025 • 20

  • Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink-Free

    Paper • 2505.06708 • Published May 10, 2025 • 10

  • Transformers without Normalization

    Paper • 2503.10622 • Published Mar 13, 2025 • 170

  • Forgetting Transformer: Softmax Attention with a Forget Gate

    Paper • 2503.02130 • Published Mar 3, 2025 • 32

  • Hyper-Connections

    Paper • 2409.19606 • Published Sep 29, 2024 • 26

  • Virtual Width Networks

    Paper • 2511.11238 • Published Nov 14, 2025 • 38