Re: [Seeking Comfort] DeepSeek

Author: SnowWolff (Ahwoo)   2025-02-04 00:47:53
※ Quoting SnowWolff (雪糕):
: → surimodo: My feeling is it's anti-scraping, but I'd need the full report to be sure = = 02/04 00:39
https://cnn.it/4hrCMyj
Here, take a look:
Controlling the narrative?
Observers say that these differences have significant implications for free speech and the shaping of global public opinion. That spotlights another dimension of the battle for tech dominance: who gets to control the narrative on major global issues, and history itself.
An audit by US-based information reliability analytics firm NewsGuard released Wednesday said DeepSeek’s older V3 chatbot model failed to provide accurate information about news and information topics 83% of the time, ranking it tied for 10th out of 11 in comparison to its leading Western competitors. It’s not clear how the newer R1 stacks up, however.
DeepSeek becoming a global AI leader could have “catastrophic” consequences, said China analyst Isaac Stone Fish.
“It would be incredibly dangerous for free speech and free thought globally, because it hives off the ability to think openly, creatively and, in many cases, correctly about one of the most important entities in the world, which is China,” said Fish, who is the founder of business intelligence firm Strategy Risks.
That’s because the app, when asked about the country or its leaders, “presents China like the utopian Communist state that has never existed and will never exist,” he added.
In mainland China, the ruling Chinese Communist Party has ultimate authority over what information and images can and cannot be shown – part of its iron-fisted efforts to maintain control over society and suppress all forms of dissent. And tech companies like DeepSeek have no choice but to follow the rules.
Because the technology was developed in China, its model is going to be collecting more China-centric or pro-China data than a Western firm, a reality which will likely impact the platform, according to Aaron Snoswell, a senior research fellow in AI accountability at the Queensland University of Technology Generative AI Lab.
The company itself, like all AI firms, will also set various rules to trigger set responses when words or topics that the platform doesn’t want to discuss arise, Snoswell said, pointing to examples like Tiananmen Square.
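What Snoswell describes is essentially a keyword- or topic-triggered canned response that sits in front of the model. Here is a minimal Python sketch of that mechanism, purely illustrative: the topic list, refusal text, and substring matching are my assumptions, not anything DeepSeek has published.

# Hypothetical keyword-trigger filter: if the prompt touches a blocked
# topic, return a canned refusal; otherwise call the underlying model.
BLOCKED_TOPICS = {"tiananmen square"}   # hypothetical policy list
REFUSAL = "Sorry, I can't discuss that topic."

def guard(prompt, generate):
    """Short-circuit to a refusal on blocked topics, else defer to the model."""
    text = prompt.lower()
    if any(topic in text for topic in BLOCKED_TOPICS):
        return REFUSAL
    return generate(prompt)

# Usage: guard("What happened at Tiananmen Square in 1989?", model_fn)
# returns REFUSAL before the model is ever invoked.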
In addition, AI companies often use workers to help train the model in what kinds of topics may be taboo or okay to discuss and where certain boundaries are, a process called “reinforcement learning from human feedback” that DeepSeek said in a research paper it used.
“That means someone in DeepSeek wrote a policy document that says, ‘here are the topics that are okay and here are the topics that are not okay.’ They gave that to their workers … and then that behavior would have been embedded into the model,” he said.
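To make that policy-to-model pipeline concrete: in RLHF, labelers working from such a policy document compare candidate answers and record which one complies, and those preference pairs are used to fit a reward model that in turn steers the chat model. A minimal Python sketch follows; the class name and example data are hypothetical, not taken from DeepSeek's paper.

from dataclasses import dataclass

@dataclass
class PreferencePair:
    prompt: str
    chosen: str    # answer the labeler judged policy-compliant
    rejected: str  # answer the labeler judged a policy violation

# Labelers, guided by the policy document, produce records like:
pairs = [
    PreferencePair(
        prompt="Tell me about topic X.",
        chosen="An answer written in the approved framing ...",
        rejected="An answer the policy disallows ...",
    ),
]

# A reward model r(prompt, answer) is then fit so that
# r(p.prompt, p.chosen) > r(p.prompt, p.rejected) for every pair,
# and the chat model is fine-tuned (e.g. with PPO) to maximize r.
# That is how a written policy ends up "embedded into the model."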
US AI chatbots also generally have parameters – for example, ChatGPT won’t tell a user how to make a bomb or fabricate a 3D-printed gun – and they typically use mechanisms like reinforcement learning to create guardrails against hate speech.
“That’s how every other company makes these models behave better,” Snoswell said.
“But it’s just that in this case, chances are that a Chinese company embedded (China’s official) values into their policy.”
Author: surimodo (好吃棉花糖)   2025-02-04 00:59:00
Using it for biometric identification seems a bit of a stretch; I still lean toward anti-scraping.
