Hi everyone, I'd like to ask a question about CV (cross-validation).
I used two different CV methods from sklearn.model_selection:
1. train_test_split
2. StratifiedKFold
Training the same model with each, I got:
1. train acc ~90%, test acc ~75% (overfitting)
2. train acc ~90%, test acc ~30% (average across folds)
Why is the gap on testing so huge?
Does this mean the model I trained with method 1 is catastrophically overfitting?
Or is my dataset simply impossible to learn from?
Or did I just write my code wrong?
In the Keras logs:
With method 1, val_acc climbs along with train acc, but with method 2 every fold is stuck flat at 30%.
Python code:
from sklearn.model_selection import train_test_split, StratifiedKFold
1.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=87)
2.
skf = StratifiedKFold(n_splits=4)
for train_index, test_index in skf.split(X, y):
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = y[train_index], y[test_index]
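For reference, here is a minimal, self-contained sketch of the two strategies side by side on synthetic data, using LogisticRegression as a hypothetical stand-in for the actual Keras model (the real model and dataset are assumptions here). One thing worth noting: StratifiedKFold defaults to shuffle=False, so if the rows of X are ordered (e.g. sorted by class or by some feature), the folds can behave very differently from a shuffled train_test_split; passing shuffle=True makes the comparison fairer.

```python
# Sketch only: LogisticRegression stands in for the actual Keras model,
# and make_classification stands in for the real dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, train_test_split

X, y = make_classification(n_samples=400, n_classes=3, n_informative=5,
                           random_state=87)

# Method 1: a single 70/30 held-out split.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=87)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("train_test_split test acc:", clf.score(X_te, y_te))

# Method 2: StratifiedKFold, accuracy averaged over the 4 folds.
# shuffle=True guards against row ordering leaking into the folds.
skf = StratifiedKFold(n_splits=4, shuffle=True, random_state=87)
scores = []
for train_index, test_index in skf.split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[train_index],
                                                y[train_index])
    scores.append(clf.score(X[test_index], y[test_index]))
print("StratifiedKFold mean test acc:", np.mean(scores))
```

With shuffling enabled, the two estimates should land in the same ballpark on a learnable dataset; a large gap like 75% vs 30% usually points at something else (row ordering, label leakage, or a bug in the fold loop) rather than overfitting alone.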
Any pointers would be much appreciated, thanks!