Fw: [News] JVC's reply clarifies: Chiang Wei-ling is one of the innocent victims

Author: bmka (偶素米蟲)   2014-07-16 18:50:53
Plagiarism, or reckless gift authorship?
Either way, both violate academic ethics.
Minister Chiang can pick his own charge.
※ [Reposted from the AfterPhD board #1JnQjBQ1 ]
Author: bmka (偶素米蟲) Board: AfterPhD
Title: Re: [News] JVC's reply clarifies: Chiang Wei-ling is one of the innocent victims
Time: Wed Jul 16 06:29:29 2014
I hope the Ministry of Science and Technology prints out these two papers by former Minister Chiang and compares them side by side.
Paper A:
Chen, Chen-Wu, Po-Chen Chen, and Wei-Ling Chiang.
"Modified intelligent genetic algorithm-based
adaptive neural network control for uncertain structural systems."
Journal of Vibration and Control 19.9 (2013): 1333-1347.
Paper B:
Chen, C. W., P. C. Chen, and W. L. Chiang.
"Stabilization of adaptive neural network controllers for nonlinear
structural systems using a singular perturbation approach."
Journal of Vibration and Control 17.8 (2011): 1241-1252.
This is quite clearly *at least* self-plagiarism (which also counts as plagiarism in violation of academic ethics).
Former Minister Chiang should stop insisting he did not plagiarize;
the slap in the face will sting.
Since the equations are hard to display, I quote only a few (consecutive) paragraphs from the Introduction of each paper for comparison.
Paper A:
...Many NN systems, which are essentially intelligent inference systems
implemented in the framework of adaptive networks, have been
developed to model or control nonlinear plants with remarkable results.
The desired performance can be obtained with fewer adjustable
parameters, although sometimes more training is required to achieve
the higher accuracy derived from the transfer function and the learning
algorithm. In addition to these features, NNs also act as a universal
approximator (Hartman et al., 1990; Funahashi and Nakamura, 1993)
where the feedforward network is very important. A backpropagation
algorithm (Hecht-Nielsen, 1989; Ku and Lee, 1995), is usually used in
the feedforward type of NN but heavy and complicated learning is
needed to tune each network weight. Aside from the backpropagation
type of NN, another common feedforward NN is the radial basis function
network (RBFN) (Powell, 1987, 1992; Park and Sandberg, 1991).
Paper B:
...Many NN systems, which are essentially intelligent inference systems
implemented in the framework of adaptive networks, have been
developed to model or control nonlinear plants, with remarkable results.
The desired performance can be obtained with fewer adjustable
parameters, although sometimes more training derived from the
transfer function and the learning algorithm is needed to achieve
sufficient accuracy. In addition, NN also acts as a universal approximator
so the feedforward network is very important (Hartman et al., 1990;
Funahashi and Nakamura, 1993). A backpropagation algorithm is usually
used in the feedforward type of NN, but this necessitates heavy and
complicated learning to tune each network weight (Hecht-Nielsen, 1989;
Ku and Lee, 1995). Besides the backpropagation type of NN, another
common feedforward NN is the radial basis function network (RBFN)
(Powell, 1987, 1992; Park and Sandberg, 1991).
Paper A:
RBFNs use only one hidden layer. The transfer function of the hidden
layer is a nonlinear semi-affine function. Obviously, the learning rate
of the RBFN will be faster than that of the backpropagation network.
Furthermore, the RBFN can approximate any nonlinear continuous
function and eliminate local minimum problems (Powell, 1987, 1992;
Park and Sandberg, 1991). These features mean that the RBFN is
usually used for real-time control in nonlinear dynamic systems.
Some results indicate that, under certain mild function conditions,
the RBFN is capable of universal approximations (Park and Sandberg,
1991; Powell, 1992).
Paper B:
The RBFN requires the use of only one hidden layer, and the transfer
function for the hidden layer is a nonlinear semi-affine function.
Obviously, the learning rate will be faster than that of the backpropagation
network. Furthermore, one can approximate any nonlinear continuous
function and eliminate local minimum problems with this method
(Powell, 1987, 1992; Park and Sandberg, 1991). Because of these features,
this technique is usually used for real-time control in nonlinear dynamic
systems. Some results indicate that, under certain mild function conditions,
the RBFN is even capable of universal approximations (Park and Sandberg,
1991; Powell, 1992).
Paper A:
Adaptive algorithms can be utilized to find the best high-performance
parameters for the NN (Goodwin and Sin, 1984; Sanner and Slotine, 1992).
Adaptive laws have been designed for the Lyapunov synthesis approach
to tune the adjustable parameters of the RBFN, and analyze the stability
of the overall system. A genetic algorithm (GA) (Goldberg, 1989; Chen,
1998), is the usual optimization technique used in the self-learning or
training strategy to decide the initial values of the parameter vector.
This GA-based modified adaptive neural network controller (MANNC)
should improve the immediate response, the stability, and the robustness
of the control system
Paper B:
Adaptive algorithms can be utilized to find the best high-performance
parameters for the NN. The adaptive laws of the Lyapunov synthesis
approach are designed to tune the adjustable parameters of the RBFN,
and analyze the stability of the overall system. A genetic algorithm (GA)
is the usual optimization technique used in the self-learning or training
strategy to decide the initial values included in the parameter vector
(Goldberg, 1989; Chen, 1998). The use of a GA-based adaptive neural
network controller (ANNC) should improve the immediate response,
stability, and robustness of the control system.
Paper A:
Another common problem encountered when switching the control
input of the sliding model system is the so-called "chattering" phenomenon.
The smoothing of control discontinuity inside a thin boundary layer
essentially acts as a low-pass filter structure for the local dynamics, thus
eliminating chattering (Utkin, 1978; Khalil, 1996). The laws are updated
by the introduction of a boundary-layer function to cover parameter errors
and modeling errors, and to guarantee that the state errors converge
within a specified error bound.
Paper B:
Another common problem encountered when switching the control
input of the sliding model system is the so-called “chattering” phenomenon.
Sometimes the smoothing of control discontinuity inside a thin boundary layer
essentially acts as a low-pass filter structure for the local dynamics, thus
eliminating chattering (Utkin, 1978; Khalil, 1996). The laws for this process
are updated by the introduction of a boundary-layer function to cover
parameter errors and modeling errors. This also guarantees that the
state errors converge within a specified error bound.
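The near-verbatim overlap above can also be quantified mechanically. A minimal sketch using Python's standard-library difflib; the excerpt variables hold only the opening sentence quoted from each paper, as shortened stand-ins for the full paragraphs:

```python
from difflib import SequenceMatcher

# Opening sentences quoted from the Introduction of each paper
# (shortened stand-ins for the full compared paragraphs).
excerpt_a = ("Many NN systems, which are essentially intelligent inference "
             "systems implemented in the framework of adaptive networks, have been "
             "developed to model or control nonlinear plants with remarkable results.")
excerpt_b = ("Many NN systems, which are essentially intelligent inference "
             "systems implemented in the framework of adaptive networks, have been "
             "developed to model or control nonlinear plants, with remarkable results.")

# ratio() returns 2*M/T over matched characters; values near 1.0
# indicate near-verbatim reuse (here the texts differ by one comma).
ratio = SequenceMatcher(None, excerpt_a, excerpt_b).ratio()
print(f"similarity: {ratio:.3f}")
```

Plagiarism-detection services use more robust measures, but even this crude character-level ratio makes the point: the two introductions are essentially the same text.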
If this is not plagiarism, what is?
Further reading: The ethics of self-plagiarism
http://cdn2.hubspot.net/hub/92785/file-5414624-pdf/media/ith-selfplagiarism-whitepaper.pdf
Self-Plagiarism is defined as a type of plagiarism in which
the writer republishes a work in its entirety or reuses portions
of a previously written text while authoring a new work.
Author: jhyen (jhyen)   2014-07-16 06:39:00
Never mind the rest; just pulling up the 60 papers retracted by JVC will be spectacular enough...
Author: bmka (偶素米蟲)   2014-07-16 07:23:00
Note that the second paper is not among the 60 that were caught! It seems there are still plenty of unexploded bombs.
Author: MyDice (我愛林貞烈)   2014-07-16 08:10:00
The Ministry of Science and Technology will not investigate these; we can only report them to JVC.
Author: wacomnow (無憂)   2014-07-16 08:19:00
Kudos for the effort! Reporters, come copy this!
Author: WTFCAS (我愛黑襪寶貝)   2014-07-16 08:57:00
The keyboard made an input error again…
Author: flashegg (閃光蛋)   2014-07-16 10:42:00
The second paper (the earlier one, from 2011) is not among the 60 that were caught, which suggests it may have actually passed review by genuine scholars. Then the 2013 paper, being self-plagiarism, could not risk real review, so fake reviewer accounts were used and JVC accepted it. That is just my personal take.
Author: bmka (偶素米蟲)   2014-07-16 10:49:00
You would have to ask Chiang Wei-ling... his only two options are plagiarism or never having read the paper at all.
Author: flashegg (閃光蛋)   2014-07-16 10:50:00
Next, CW Chen could argue that the 2013 paper is a sequel to the 2011 one.
Author: bmka (偶素米蟲)   2014-07-16 10:50:00
I would guess there are other papers written from the same template.
Author: bmka (偶素米蟲)   2014-07-16 10:51:00
Even a sequel cannot self-plagiarize; that is common sense.
Author: flashegg (閃光蛋)   2014-07-16 10:51:00
In any case, this kind of self-plagiarism is not unheard of in STEM papers; in the end the department or college faculty review committee sends it back for re-review and the matter fizzles out.
Author: bmka (偶素米蟲)   2014-07-16 10:54:00
Plagiarism is plagiarism; academia will render its own verdict :)
Author: flashegg (閃光蛋)   2014-07-16 10:55:00
Besides, if CW Chen stepped up and took the fall, saying he added his adviser's name without Chiang's consent, purely because he had been mentored by him or out of respect and so on, would that not let Chiang step down safely? This is also just my personal take~
Author: bmka (偶素米蟲)   2014-07-16 10:56:00
If it were a single unreported courtesy authorship, fine. But over all these years a whole pile of names were added, supposedly without his knowledge, and yet they are listed openly on his CV... that is very hard to explain away. Actually, my guess is that Chiang Wei-ling never read these papers (tributes) at all; he just dares not admit they are not his research. He violated academic ethics by taking authorship. If you dare accept tribute papers from students, you must dare to own them; you cannot dump the blame on the students when things blow up.
Author: flashegg (閃光蛋)   2014-07-16 11:03:00
This all circles around morals and human nature. Suppose CW Chen really did add his adviser's name without Chiang knowing and only told him after the paper was accepted. How many advisers would say, "No, take my name off immediately"? I suspect most would gladly accept it and even think the student thoughtful.
Author: bmka (偶素米蟲)   2014-07-16 11:05:00
That is still Chiang's fault. The proper response is to sternly warn the student that this must never happen again.
Author: flashegg (閃光蛋)   2014-07-16 11:07:00
I am not condoning Chiang's behavior; I am just saying this sort of thing is all too common.
Author: bmka (偶素米蟲)   2014-07-16 11:07:00
Academia is a small world; you have to guard your own reputation, all the more so for a big name like Chiang.
Author: bmka (偶素米蟲)   2014-07-16 11:08:00
I understand it is common too, but if you dare do it, do not dream of dodging responsibility when it blows up; that is all. If Chiang had not kept dodging responsibility, I would not have wasted time reading their junk papers (the more I read, the angrier I got). Also, Chiang was far too indiscriminate, putting his name even on papers in a third-rate journal like this.
Author: tainanuser (南南南)   2014-07-16 11:42:00
Well done, very thorough!
Author: MyDice (我愛林貞烈)   2014-07-16 12:05:00
Can we see from the MOST website or Chiang's own page how many publications he has had since 2010? Especially how rampant the careless authorship was while he served as university president and then minister.
Author: ceries (no)   2014-07-16 14:53:00
Impressive!
Author: jabari (Still不敢開槍的娘娘腔)   2014-07-16 16:27:00
Can we pin this on the student movement? Or on the eight years of lingering poison??
Author: jack5756 (Dilbert)   2014-07-16 17:09:00
Yes, it is all the student movement's fault, and many of the papers are leftover poison from those eight years.
Author: MIT8818 (台灣製造)   2014-07-16 18:57:00
How does this content show he is innocent?
Author: soultakerna   2014-07-16 18:57:00
They literally copy-pasted XD
Author: bmka (偶素米蟲)   2014-07-16 18:58:00
He is not innocent. Minister Chiang's paper is plagiarism, self-plagiarism at that, and there is no denying it.
Author: soultakerna   2014-07-16 18:58:00
Looks like they did change a tiny bit lol
Author: soria (soria)   2014-07-16 18:59:00
Oh, self-plagiarism, is it?
Author: soultakerna   2014-07-16 19:03:00
Do these passages cite a reference? I could not locate the original text. I do know that changing a little is still copying.
Author: soria (soria)   2014-07-16 19:08:00
I know why he rushed to distance himself on day one: the deeper you dig, the more problems you are sure to find.
Author: bmka (偶素米蟲)   2014-07-16 19:09:00
Zombie peer review exists precisely to sneak obviously problematic papers like this one past the gate.
Author: walei98 (超和平buster)   2014-07-16 19:15:00
Bump.
Author: offish (offish)   2014-07-16 19:19:00
No time to read it closely; bumping first.
Author: soria (soria)   2014-07-16 19:37:00
The devil is in the details.
