
RTLearner

The story of a non-major becoming an RTL engineer

  • Home
  • Blog
  • About
  • AI & HW Fundamentals

    AI Architecture 4. Training vs. Inference

    In previous posts, we learned that MAC operations, memory hierarchies, and parallel processing (SIMD) form the foundation of deep learning hardware. Now we stand at the biggest crossroads in the AI semiconductor market:

  • AI & HW Fundamentals

    AI Architecture 3. The Aesthetics of MatMul: Why Deep Learning Chooses GPUs/NPUs

    In previous posts, we examined the significant hardware costs incurred by the fundamental operation of artificial neurons, the MAC (Multiply-Accumulate),

  • AI & HW Fundamentals

    AI Architecture 2. The Cost of Activation: Free ReLU vs. Expensive Sigmoid

    In the previous post, we examined how expensive the MAC (Multiply-Accumulate) operation—the core of artificial neurons—is in terms of hardware, specifically regarding Multiplier area and Memory Bandwidth.

  • AI & HW Fundamentals

    AI Architecture 1. Anatomy of an Artificial Neuron: Y=WX+B on Silicon

    When starting with deep learning, the first concept we encounter is the Perceptron,

Page navigation

Previous 1 2 3


Category

  • RTL engineer
    • FPGA
    • Verilog
  • Semiconductor process
  • RRAM Research
  • AI Architecture
    • AI & HW Fundamentals


Copyright © 2026 RTLearner.
