Original title: Exclusive-OpenAI set to finalize first custom chip design this year
Original link: https://reurl.cc/M6xa8L
Published: 2025-02-10
Byline: Anna Tong, Max A. Cherney and Krystal Hu
Original content:
OpenAI is pushing ahead on its plan to reduce its reliance on Nvidia for its
chip supply by developing its first generation of in-house
artificial-intelligence silicon.
The ChatGPT maker is finalizing the design for its first in-house chip in the
next few months and plans to send it for fabrication at Taiwan Semiconductor
Manufacturing Co, sources told Reuters. The process of sending a first design
through a chip factory is called "taping out."
OpenAI and TSMC declined to comment.
The update shows that OpenAI is on track to meet its ambitious goal of mass
production at TSMC in 2026. A typical tape-out costs tens of millions of
dollars and takes roughly six months to produce a finished chip, unless
OpenAI pays substantially more for expedited manufacturing. There is no
guarantee the silicon will function on the first tape-out, and a failure would
require the company to diagnose the problem and repeat the tape-out step.
Inside OpenAI, the training-focused chip is viewed as a strategic tool to
strengthen OpenAI's negotiating leverage with other chip suppliers, the
sources said. After the initial chip, OpenAI's engineers plan to develop
increasingly advanced processors with broader capabilities in each new
iteration.
If the initial tape-out goes smoothly, it would enable the ChatGPT maker to
mass-produce its first in-house AI chip and potentially test an alternative
to Nvidia's chips later this year. OpenAI’s plan to send its design to TSMC
this year demonstrates the startup has made speedy progress on its first
design, a process that can take other chip designers years longer.
Big tech companies such as Microsoft and Meta have struggled to produce
satisfactory chips despite years of effort. The recent market rout triggered
by Chinese AI startup DeepSeek has also raised questions about whether fewer
chips will be needed in developing powerful models in the future.
The chip is being designed by OpenAI’s in-house team led by Richard Ho,
which has doubled in recent months to 40 people, in collaboration with
Broadcom. Ho joined OpenAI more than a year ago from Alphabet's Google where
he helped lead the search giant's custom AI chip program. Reuters first
reported OpenAI’s plans with Broadcom last year.
Ho's team is smaller than the large-scale efforts at tech giants such as
Google or Amazon. A new chip design for an ambitious, large-scale program
could cost $500 million for a single version of a chip, according to industry
sources with knowledge of chip design budgets. Those costs could double to
build the necessary software and peripherals around it.
Generative AI model makers like OpenAI, Google and Meta have demonstrated
that ever-larger numbers of chips strung together in data centers make models
smarter, and as a result, they have an insatiable demand for the chips.
Meta has said it will spend $60 billion on AI infrastructure in the next year
and Microsoft has said it will spend $80 billion in 2025. Currently, Nvidia’s
chips are the most popular and hold a market share of roughly 80%. OpenAI
is itself participating in the $500 billion Stargate infrastructure program
announced by U.S. President Donald Trump last month.
But rising costs and dependence on a single supplier have led major customers
such as Microsoft, Meta and now OpenAI to explore in-house or external
alternatives to Nvidia’s chips.
OpenAI's in-house AI chip, while capable of both training and running AI
models, will initially be deployed on a limited scale, and primarily for
running AI models, the sources said. The chip will have a limited role within
the company's infrastructure.
To build out an effort as comprehensive as Google or Amazon's AI chip
program, OpenAI would have to hire hundreds of engineers.
TSMC is manufacturing OpenAI's AI chip using its advanced 3-nanometer process
technology. The chip features a commonly used systolic array architecture
with high-bandwidth memory (HBM) - also used by Nvidia for its chips - and
extensive networking capabilities, sources said.
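
(Side note for readers unfamiliar with the term: a systolic array is a grid of small processing elements that pass operands to their neighbors in lockstep, so a matrix multiplication is computed as data flows through the grid. The short Python sketch below is only a conceptual model of that dataflow, written for illustration; it assumes nothing about the actual OpenAI/Broadcom design beyond the name of the technique.)

import numpy as np

def systolic_matmul(A, B):
    # Conceptual model of an output-stationary systolic array computing C = A @ B.
    # PE (i, j) holds one output accumulator; operands arrive skewed in time so
    # that A[i, s] and B[s, j] meet at PE (i, j) on clock step t = i + j + s.
    # Illustrative only; not based on any detail of OpenAI's chip.
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((n, m))
    for t in range(n + m + k - 2):         # clock steps until the last operands meet
        for i in range(n):
            for j in range(m):
                s = t - i - j              # operand pair reaching PE (i, j) on this step
                if 0 <= s < k:
                    C[i, j] += A[i, s] * B[s, j]
    return C

# Quick check against a reference matrix multiply.
A = np.arange(12, dtype=float).reshape(4, 3)
B = np.arange(15, dtype=float).reshape(3, 5)
assert np.allclose(systolic_matmul(A, B), A @ B)
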
OpenAI is speeding up work on its own AI chip to reduce its reliance on
NVIDIA; it expects to finalize the design this year, send it to TSMC for
fabrication, and reach mass production in 2026. The chip is aimed mainly at
AI inference and uses a 3-nanometer process, high-bandwidth memory (HBM) and
a systolic array architecture, with strong networking capabilities. The
development team is led by former Google AI chip expert Richard Ho, works
with Broadcom, and has grown to 40 people. Developing a high-end AI chip can
cost as much as US$500 million. With demand for AI chips surging, companies
such as Microsoft, Meta and OpenAI are looking for alternatives to NVIDIA.
If the first chip tapes out successfully, OpenAI could test it as an NVIDIA
alternative within the year. The move not only strengthens its bargaining
power but also helps lower AI compute costs and reinforces OpenAI's
technological autonomy.
Thoughts/Comments:
OpenAI plans to hand its chip to TSMC for fabrication.
The moment the news broke,
the night-session futures got squeezed straight to the sky.
How could the world's GG (TSMC) possibly lose?
The short frogs say: