Please put notes at the end; violating posts will have the news article deleted.
1. Media source:
msn.com
2. Reporter byline:
Zac Bowden
3. Full headline:
Microsoft announces distilled DeepSeek R1 models for Windows 11 Copilot+ PCs
4. Full article:
Microsoft has announced that it will be bringing “NPU-optimized” versions
of the DeepSeek-R1 AI model to Copilot+ PCs soon, first with Snapdragon X
devices, followed by PCs with Intel Lunar Lake and AMD Ryzen AI 9 processors.
The first release will be the DeepSeek-R1-Distill-Qwen-1.5B model, which
will be available to developers via the Microsoft AI Toolkit. 7B and 14B
variants will arrive later.
Windows 11 Copilot+ PCs are devices equipped with at least 256GB of storage,
16GB of RAM, and an NPU capable of at least 40 TOPS (trillion operations per
second). This means some older NPU-equipped PCs won't be able to run these
models locally.
"These optimized models let developers build and deploy AI-powered
applications that run efficiently on-device, taking full advantage of the
powerful NPUs in Copilot+ PCs” says a Microsoft blog post announcing
DeepSeek R1 support. “With our work on Phi Silica, we were able to harness
highly efficient inferencing – delivering very competitive time to first
token and throughput rates, while minimally impacting battery life and
consumption of PC resources … Additionally, we take advantage of Windows
Copilot Runtime (WCR) to scale across the diverse Windows ecosystem with ONNX
QDQ format.”
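(For context: ONNX QDQ models are normally consumed through ONNX Runtime,
which reaches NPUs via execution providers. The snippet below is only a
rough Python sketch of that general flow, with an assumed model path and
Qualcomm's QNNExecutionProvider as the NPU backend; the official developer
path is the AI Toolkit and Windows Copilot Runtime.)

import onnxruntime as ort

# Hypothetical local path to a distilled DeepSeek R1 model exported in the
# ONNX QDQ format; the real file name and folder layout delivered through
# the AI Toolkit may differ.
MODEL_PATH = "deepseek-r1-distill-qwen-1.5b-qdq/model.onnx"

# ONNX Runtime reaches NPUs through execution providers; on Snapdragon X
# devices the Qualcomm NPU is exposed as "QNNExecutionProvider".
available = ort.get_available_providers()
print("Available execution providers:", available)

# Prefer the NPU provider when present, otherwise fall back to the CPU.
providers = ["CPUExecutionProvider"]
if "QNNExecutionProvider" in available:
    providers.insert(0, "QNNExecutionProvider")

session = ort.InferenceSession(MODEL_PATH, providers=providers)
print("Model inputs:", [i.name for i in session.get_inputs()])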
In the blog post, Microsoft highlights how it worked to ensure the R1 models
could run locally on NPU-based hardware. “First, we leverage a sliding
window design that unlocks super-fast time to first token and long context
support despite not having dynamic tensor support in the hardware stack.
Second, we use the 4-bit QuaRot quantization scheme to truly take advantage
of low bit processing.”
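(To give a feel for the second point, here is a minimal NumPy sketch of the
general idea behind rotation-based 4-bit quantization: an orthogonal rotation
spreads outliers out before quantizing. It illustrates the concept only and
is not Microsoft's or QuaRot's actual implementation.)

import numpy as np

# Toy illustration of why rotating weights helps low-bit quantization:
# an orthogonal rotation spreads a single outlier across many values, so
# the 4-bit quantization scale stays small and the error drops.
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256))
W[0, 0] = 50.0  # one outlier dominates the naive quantization scale

def quant4(x):
    # Symmetric per-tensor 4-bit quantization (integer levels -7..7),
    # returned here already dequantized for easy error comparison.
    scale = np.abs(x).max() / 7.0
    return np.clip(np.round(x / scale), -7, 7) * scale

# Naive 4-bit quantization of the raw weights.
err_naive = np.abs(W - quant4(W)).mean()

# Random orthogonal rotation (QR of a Gaussian matrix). Because Q is
# orthogonal, rotating, quantizing, and rotating back changes nothing
# except the quantization error.
Q, _ = np.linalg.qr(rng.standard_normal((256, 256)))
err_rotated = np.abs(W - quant4(W @ Q) @ Q.T).mean()

print(f"mean abs error, naive 4-bit  : {err_naive:.4f}")
print(f"mean abs error, rotated 4-bit: {err_rotated:.4f}")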
Microsoft says the 1.5B Distilled R1 model will be available soon, and that
it will be accessible via the AI Toolkit extension in VS Code. Developers can
use Playground to experiment with DeepSeek R1 locally on compatible Copilot+
PCs. In addition to supporting DeepSeek R1 locally, Microsoft is also making
these AI models available in the cloud via Azure AI Foundry. “As part of
Azure AI Foundry, DeepSeek R1 is accessible on a trusted, scalable, and
enterprise-ready platform, enabling businesses to seamlessly integrate
advanced AI while meeting SLAs, security, and responsible AI commitments—all
backed by Microsoft’s reliability and innovation.”
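(On the cloud side, a Foundry deployment can be called like any other
chat-completions endpoint. Below is a minimal sketch using the
azure-ai-inference Python package, with a placeholder endpoint and key and
an assumed deployment name of "DeepSeek-R1".)

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

# Placeholder endpoint, key, and deployment name; substitute the values
# from your own Azure AI Foundry project.
client = ChatCompletionsClient(
    endpoint="https://<your-project>.services.ai.azure.com/models",
    credential=AzureKeyCredential("<your-api-key>"),
)

response = client.complete(
    model="DeepSeek-R1",  # assumed deployment name for the R1 model
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="Summarize what a distilled model is."),
    ],
)
print(response.choices[0].message.content)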
Microsoft has moved fast to support DeepSeek R1, even as US tech firms panic
over its existence. OpenAI now claims that DeepSeek stole proprietary code to
build its AI model, which cost less than $10 million to develop. This stands
in stark contrast to the AI models developed by US firms, which have cost
billions so far.
(Note: the Qwen in the model name is Alibaba's open-source model, Tongyi
Qianwen.)
5. Full news link (or short URL); links from reprint sites such as YAHOO,
LINE, or MSN may not be used:
https://bit.ly/42z9B7O
6. Notes:
Microsoft is currently the largest single institutional shareholder of OpenAI.
Beyond supporting it in the cloud, Microsoft will also help convert DeepSeek
into an NPU-friendly model format, so everyone can run the DeepSeek-R1 model
locally on an AI PC.
DeepSeek-R1 has two distilled versions, one based on Llama and one based on
Qwen; testing shows the Qwen version performs slightly better than the Llama
one.