Plagiarism, or improper authorship attribution
Either way, both violate academic ethics
Minister Chiang can pick whichever charge he prefers
※ [Reposted from the AfterPhD board #1JnQjBQ1 ]
Author: bmka (偶素米蟲) Board: AfterPhD
Subject: Re: [News] JVC's reply letter clarifies: Chiang Wei-ning is one of the innocent victims
Date: Wed Jul 16 06:29:29 2014
I hope the Ministry of Science and Technology will print out these two papers by former Minister Chiang and compare them side by side.
Paper A:
Chen, Chen-Wu, Po-Chen Chen, and Wei-Ling Chiang.
"Modified intelligent genetic algorithm-based
adaptive neural network control for uncertain structural systems."
Journal of Vibration and Control 19.9 (2013): 1333-1347.
Paper B:
Chen, C. W., P. C. Chen, and W. L. Chiang.
"Stabilization of adaptive neural network controllers for nonlinear
structural systems using a singular perturbation approach."
Journal of Vibration and Control 17.8 (2011): 1241-1252.
This is quite clearly *at least* self-plagiarism (which is itself a form of plagiarism that violates academic ethics).
Former Minister Chiang, please stop insisting that you did not plagiarize;
the face-slapping will only get worse.
Since the mathematical equations are hard to display here, I am only excerpting a few (consecutive) paragraphs from the Introduction of each paper for comparison.
Paper A:
...Many NN systems, which are essentially intelligent inference systems
implemented in the framework of adaptive networks, have been
developed to model or control nonlinear plants with remarkable results.
The desired performance can be obtained with fewer adjustable
parameters, although sometimes more training is required to achieve
the higher accuracy derived from the transfer function and the learning
algorithm. In addition to these features, NNs also act as a universal
approximator (Hartman et al., 1990; Funahashi and Nakamura, 1993)
where the feedforward network is very important. A backpropagation
algorithm (Hecht-Nielsen, 1989; Ku and Lee, 1995), is usually used in
the feedforward type of NN but heavy and complicated learning is
needed to tune each network weight. Aside from the backpropagation
type of NN, another common feedforward NN is the radial basis function
network (RBFN) (Powell, 1987, 1992; Park and Sandberg, 1991).
Paper B:
...Many NN systems, which are essentially intelligent inference systems
implemented in the framework of adaptive networks, have been
developed to model or control nonlinear plants, with remarkable results.
The desired performance can be obtained with fewer adjustable
parameters, although sometimes more training derived from the
transfer function and the learning algorithm is needed to achieve
sufficient accuracy. In addition, NN also acts as a universal approximator
so the feedforward network is very important (Hartman et al., 1990;
Funahashi and Nakamura, 1993). A backpropagation algorithm is usually
used in the feedforward type of NN, but this necessitates heavy and
complicated learning to tune each network weight (Hecht-Nielsen, 1989;
Ku and Lee, 1995). Besides the backpropagation type of NN, another
common feedforward NN is the radial basis function network (RBFN)
(Powell, 1987, 1992; Park and Sandberg, 1991).
Paper A:
RBFNs use only one hidden layer. The transfer function of the hidden
layer is a nonlinear semi-affine function. Obviously, the learning rate
of the RBFN will be faster than that of the backpropagation network.
Furthermore, the RBFN can approximate any nonlinear continuous
function and eliminate local minimum problems (Powell, 1987, 1992;
Park and Sandberg, 1991). These features mean that the RBFN is
usually used for real-time control in nonlinear dynamic systems.
Some results indicate that, under certain mild function conditions,
the RBFN is capable of universal approximations (Park and Sandberg,
1991; Powell, 1992).
Paper B:
The RBFN requires the use of only one hidden layer, and the transfer
function for the hidden layer is a nonlinear semi-affine function.
Obviously, the learning rate will be faster than that of the backpropagation
network. Furthermore, one can approximate any nonlinear continuous
function and eliminate local minimum problems with this method
(Powell, 1987, 1992; Park and Sandberg, 1991). Because of these features,
this technique is usually used for real-time control in nonlinear dynamic
systems. Some results indicate that, under certain mild function conditions,
the RBFN is even capable of universal approximations (Park and Sandberg,
1991; Powell, 1992).
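(Aside, for readers less familiar with the terminology in these excerpts: an RBFN is simply a network with a single hidden layer of radial basis functions and linear output weights. The Python sketch below is purely illustrative; the target function, centers, width, and every other value are my own assumptions, not anything taken from either paper.)

import numpy as np

# Minimal RBFN sketch: one hidden layer of Gaussian radial basis functions,
# linear output weights fitted by least squares. Illustrative only; the
# target function and all parameter values here are assumptions.
def rbf_design_matrix(x, centers, width):
    # Each column is one Gaussian basis function evaluated at the inputs.
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))

rng = np.random.default_rng(0)
x_train = np.linspace(-3.0, 3.0, 200)
y_train = np.sin(x_train) + 0.1 * rng.standard_normal(x_train.size)  # toy nonlinear mapping

centers = np.linspace(-3.0, 3.0, 20)   # fixed RBF centers
width = 0.5                            # shared Gaussian width

Phi = rbf_design_matrix(x_train, centers, width)
weights, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)  # tune only the output weights

x_test = np.linspace(-3.0, 3.0, 50)
y_pred = rbf_design_matrix(x_test, centers, width) @ weights
print("max approximation error:", np.max(np.abs(y_pred - np.sin(x_test))))

Because only the linear output weights are fitted, training is much lighter than backpropagating through every weight of a multilayer network, which is the speed advantage both excerpts describe.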
Paper A:
Adaptive algorithms can be utilized to find the best high-performance
parameters for the NN (Goodwin and Sin, 1984; Sanner and Slotine, 1992).
Adaptive laws have been designed for the Lyapunov synthesis approach
to tune the adjustable parameters of the RBFN, and analyze the stability
of the overall system. A genetic algorithm (GA) (Goldberg, 1989; Chen,
1998), is the usual optimization technique used in the self-learning or
training strategy to decide the initial values of the parameter vector.
This GA-based modified adaptive neural network controller (MANNC)
should improve the immediate response, the stability, and the robustness
of the control system
Paper B:
Adaptive algorithms can be utilized to find the best high-performance
parameters for the NN. The adaptive laws of the Lyapunov synthesis
approach are designed to tune the adjustable parameters of the RBFN,
and analyze the stability of the overall system. A genetic algorithm (GA)
is the usual optimization technique used in the self-learning or training
strategy to decide the initial values included in the parameter vector
(Goldberg, 1989; Chen, 1998). The use of a GA-based adaptive neural
network controller (ANNC) should improve the immediate response,
stability, and robustness of the control system.
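(Both versions describe the same scheme: a GA chooses the initial values of the controller's parameter vector, and the Lyapunov-based adaptive laws then tune them online. A rough Python sketch of the GA initialization step follows; the fitness function, bounds, and every GA hyperparameter are my own illustrative assumptions, not values from the papers.)

import numpy as np

# Rough sketch of GA-based selection of an initial parameter vector.
# The fitness function stands in for "negative closed-loop tracking error";
# it and all hyperparameters below are illustrative assumptions.
def fitness(theta):
    return -np.sum((theta - 0.3) ** 2)

def ga_initialize(dim, pop_size=40, generations=60, mutation_std=0.05, seed=0):
    rng = np.random.default_rng(seed)
    population = rng.uniform(-1.0, 1.0, size=(pop_size, dim))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in population])
        parents = population[np.argsort(scores)[-pop_size // 2:]]      # truncation selection
        idx = rng.integers(0, len(parents), size=(pop_size, 2))
        children = parents[idx].mean(axis=1)                           # averaging crossover
        population = children + mutation_std * rng.standard_normal(children.shape)
    scores = np.array([fitness(ind) for ind in population])
    return population[np.argmax(scores)]  # best individual = initial parameter vector

theta0 = ga_initialize(dim=8)
print("GA-selected initial parameters:", np.round(theta0, 3))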
Paper A:
Another common problem encountered when switching the control
input of the sliding model system is the so-called "chattering" phenomenon.
The smoothing of control discontinuity inside a thin boundary layer
essentially acts as a low-pass filter structure for the local dynamics, thus
eliminating chattering (Utkin, 1978; Khalil, 1996). The laws are updated
by the introduction of a boundary-layer function to cover parameter errors
and modeling errors, and to guarantee that the state errors converge
within a specified error bound.
Paper B:
Another common problem encountered when switching the control
input of the sliding model system is the so-called “chattering” phenomenon.
Sometimes the smoothing of control discontinuity inside a thin boundary layer
essentially acts as a low-pass filter structure for the local dynamics, thus
eliminating chattering (Utkin, 1978; Khalil, 1996). The laws for this process
are updated by the introduction of a boundary-layer function to cover
parameter errors and modeling errors. This also guarantees that the
state errors converge within a specified error bound.
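(The "boundary layer" both versions mention amounts to replacing the discontinuous sign term in the switching control law with a saturation that is linear inside a thin layer around the sliding surface, which is what smooths out chattering. A tiny illustrative Python sketch, with an assumed layer thickness:)

import numpy as np

# Compare the discontinuous switching term sign(s) with its boundary-layer
# (saturation) replacement sat(s / phi). The values of s and phi are assumptions.
def sat(z):
    # Linear inside the boundary layer, saturates to +/-1 outside it.
    return np.clip(z, -1.0, 1.0)

phi = 0.1                       # assumed boundary-layer thickness
s = np.linspace(-0.5, 0.5, 11)  # sample values of the sliding variable

switching = np.sign(s)          # discontinuous term: the source of chattering
smoothed = sat(s / phi)         # boundary-layer term: smooth near s = 0

for si, sw, sm in zip(s, switching, smoothed):
    print(f"s = {si:+.2f}  sign(s) = {sw:+.1f}  sat(s/phi) = {sm:+.2f}")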
If this is not plagiarism, what is?
Further reading: The ethics of self-plagiarism
http://cdn2.hubspot.net/hub/92785/file-5414624-pdf/media/ith-selfplagiarism-whitepaper.pdf
Self-Plagiarism is defined as a type of plagiarism in which
the writer republishes a work in its entirety or reuses portions
of a previously written text while authoring a new work.