Comparing randomized search and grid search for hyperparameter estimation

Compare randomized search and grid search for optimizing the hyperparameters of a linear SVM trained with SGD. All parameters that influence the learning are searched simultaneously (except for the number of estimators, which poses a time / quality trade-off).

The randomized search and the grid search explore exactly the same parameter space. The resulting parameter settings are quite similar, while the run time of the randomized search is drastically lower.

The performance of the randomized search may be slightly worse; this is most likely a noise effect and would not carry over to a held-out test set.

Note that in practice one would not search over this many different parameters simultaneously with grid search, but would rather pick only the ones deemed most important (a reduced search of that kind is sketched below).
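As a minimal sketch of that advice (hypothetical, not part of the original script; the reduced grid and the name small_search are illustrative only), one could keep the other SGDClassifier parameters at their defaults and search only over the regularization strength alpha:

import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_digits(return_X_y=True, n_class=3)
clf = SGDClassifier(loss="hinge", penalty="elasticnet", fit_intercept=True)

# hypothetical reduced grid: only alpha is searched, 5 candidates in total
small_grid = {"alpha": np.power(10, np.arange(-4, 1, dtype=float))}
small_search = GridSearchCV(clf, param_grid=small_grid).fit(X, y)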

RandomizedSearchCV took 1.12 seconds for 15 candidate parameter settings.
Model with rank: 1
Mean validation score: 0.991 (std: 0.006)
Parameters: {'alpha': 0.05063247886572012, 'average': False, 'l1_ratio': 0.13822072286080167}

Model with rank: 2
Mean validation score: 0.987 (std: 0.014)
Parameters: {'alpha': 0.010877306503748912, 'average': True, 'l1_ratio': 0.9226260871125187}

Model with rank: 3
Mean validation score: 0.976 (std: 0.023)
Parameters: {'alpha': 0.7271482064048191, 'average': False, 'l1_ratio': 0.25183501383331797}

GridSearchCV took 3.85 seconds for 60 candidate parameter settings.
Model with rank: 1
Mean validation score: 0.993 (std: 0.011)
Parameters: {'alpha': 0.09999999999999999, 'average': False, 'l1_ratio': 0.1111111111111111}

Model with rank: 2
Mean validation score: 0.987 (std: 0.013)
Parameters: {'alpha': 0.01, 'average': False, 'l1_ratio': 0.5555555555555556}

Model with rank: 3
Mean validation score: 0.987 (std: 0.007)
Parameters: {'alpha': 0.01, 'average': False, 'l1_ratio': 0.0}

from time import time

import numpy as np
import scipy.stats as stats

from sklearn.datasets import load_digits
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV

# get some data
X, y = load_digits(return_X_y=True, n_class=3)

# build a classifier
clf = SGDClassifier(loss="hinge", penalty="elasticnet", fit_intercept=True)


# Utility function to report best scores
def report(results, n_top=3):
    for i in range(1, n_top + 1):
        candidates = np.flatnonzero(results["rank_test_score"] == i)
        for candidate in candidates:
            print("Model with rank: {0}".format(i))
            print(
                "Mean validation score: {0:.3f} (std: {1:.3f})".format(
                    results["mean_test_score"][candidate],
                    results["std_test_score"][candidate],
                )
            )
            print("Parameters: {0}".format(results["params"][candidate]))
            print("")


# specify parameters and distributions to sample from
param_dist = {
    "average": [True, False],
    "l1_ratio": stats.uniform(0, 1),
    "alpha": stats.loguniform(1e-2, 1e0),
}
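# (sketch, not in the original example) the frozen distributions above can be
# sampled directly to preview the kind of values RandomizedSearchCV will draw:
# stats.loguniform(1e-2, 1e0).rvs(size=3, random_state=0) yields alpha values
# spread evenly on a log scale between 0.01 and 1.0, while
# stats.uniform(0, 1).rvs(size=3, random_state=0) yields l1_ratio values
# uniformly in [0, 1]; "average" is simply picked from the two listed values.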

# run randomized search
n_iter_search = 15
random_search = RandomizedSearchCV(
    clf, param_distributions=param_dist, n_iter=n_iter_search
)

start = time()
random_search.fit(X, y)
print(
    "RandomizedSearchCV took %.2f seconds for %d candidates parameter settings."
    % ((time() - start), n_iter_search)
)
report(random_search.cv_results_)

# use a full grid over all parameters
param_grid = {
    "average": [True, False],
    "l1_ratio": np.linspace(0, 1, num=10),
    "alpha": np.power(10, np.arange(-2, 1, dtype=float)),
}
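# note: this grid spans 2 (average) x 10 (l1_ratio) x 3 (alpha) = 60 candidate
# settings, four times the 15 candidates sampled by the randomized search above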

# run grid search
grid_search = GridSearchCV(clf, param_grid=param_grid)
start = time()
grid_search.fit(X, y)

print(
    "GridSearchCV took %.2f seconds for %d candidate parameter settings."
    % (time() - start, len(grid_search.cv_results_["params"]))
)
report(grid_search.cv_results_)
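As a small follow-up sketch (not part of the original example; it assumes the script above has just been run), the fitted search objects also expose the single best configuration directly:

# best configuration found by each strategy; by default the search objects are
# refit on the whole dataset with these parameters
print("RandomizedSearchCV best parameters:", random_search.best_params_)
print("RandomizedSearchCV best CV score: %.3f" % random_search.best_score_)
print("GridSearchCV best parameters:", grid_search.best_params_)
print("GridSearchCV best CV score: %.3f" % grid_search.best_score_)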

Total running time of the script: (0 minutes 4.979 seconds)
