
Optimizing LightGBM hyperparameters with the Optuna auto-tuning framework



1. Overview

This article explains how to use Optuna, an automatic hyperparameter-tuning framework, to optimize LightGBM (lgb) hyperparameters.
The main steps are:

  1. Define the hyperparameter search ranges.
  2. Define the objective function that each trial will run.
  3. Run the Optuna optimization.
  4. Output the best parameters and retrain the model with them.
2. Data

Please message me privately for the dataset.

3. How Optuna works

The main tuning mechanisms are:

1. Sampling algorithm
Using the record of suggested parameter values and their evaluated objective values, the sampler continually narrows the search space toward regions whose parameters yield better objective values.

2. Pruning algorithm
Unpromising trials are terminated automatically in the early stages of training (i.e., automated early stopping).

4. Complete code and steps

Main program entry point:

import lightgbm as lgb
import numpy as np
import optuna
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import KFold
import pandas as pd
def pdReadCsv(file, sep):
    # Try UTF-8 first, then fall back to common Chinese encodings.
    # on_bad_lines='skip' replaces the deprecated error_bad_lines=False.
    for encoding in ('utf-8', 'gb18030', 'gbk'):
        try:
            return pd.read_csv(file, sep=sep, encoding=encoding,
                               on_bad_lines='skip', engine='python')
        except UnicodeDecodeError:
            continue
    raise ValueError('Unable to decode %s with utf-8/gb18030/gbk' % file)


src = 'models/'  # output directory for saved models (the original 'models\' escaped the closing quote)

K = 5
seed = 1234
skf = KFold(n_splits=K, shuffle=True, random_state=seed)

train_path = r'data.csv'
df_train = pdReadCsv(train_path, ',')


def load_data():
    # Target column is '度'; note the article reuses the training set as the test set.
    X_train = df_train.drop('度', axis=1)
    y_train = df_train['度']
    return X_train, y_train, X_train, y_train


def objective(trial):
    # suggest_float replaces the deprecated suggest_uniform; learning_rate and
    # the fraction-type parameters must be strictly positive, so their lower
    # bounds are nudged above 0.
    params = {'nthread': -1, 'max_depth': trial.suggest_int('max_depth', 10, 100),
              'learning_rate': trial.suggest_float('learning_rate', 1e-3, 1),
              'bagging_fraction': trial.suggest_float('bagging_fraction', 0.1, 1),
              'num_leaves': trial.suggest_int('num_leaves', 10, 100), 'objective': 'regression',
              'feature_fraction': trial.suggest_float('feature_fraction', 0.1, 1), 'lambda_l1': 0,
              'lambda_l2': 0, 'bagging_seed': 100, 'metric': ['rmse']}
    oof1 = np.zeros(len(X_train))
    for n, (train_index, val_index) in enumerate(skf.split(X_train, y_train)):
        print("fold {}".format(n))
        X_tr, X_val = X_train.iloc[train_index], X_train.iloc[val_index]
        y_tr, y_val = y_train.iloc[train_index], y_train.iloc[val_index]
        lgb_train = lgb.Dataset(X_tr, y_tr)
        lgb_val = lgb.Dataset(X_val, y_val)
        clf = lgb.train(params, lgb_train, num_boost_round=88, valid_sets=[lgb_train, lgb_val])
        oof1[val_index] = clf.predict(X_val, num_iteration=clf.best_iteration)
    mse = mean_squared_error(y_train, oof1)
    print('train_score : ', mse)
    return mse


X_train, y_train, X_test, y_test = load_data()
study = optuna.create_study(direction='minimize')
n_trials = 89
study.optimize(objective, n_trials=n_trials)
print(study.best_value)
best = study.best_params
print(best)
params = {'nthread': -1, 'max_depth': best['max_depth'],
          'learning_rate': best['learning_rate'], 'bagging_fraction': best['bagging_fraction'],
          'num_leaves': best['num_leaves'], 'objective': 'regression',
          'feature_fraction': best['feature_fraction'], 'lambda_l1': 0,
          'lambda_l2': 0, 'bagging_seed': 100, 'metric': ['rmse']}
train_index = np.arange(0, X_train.shape[0] - 10)
val_index = np.arange(X_train.shape[0] - 10, X_train.shape[0])
X_tr, X_val = X_train.iloc[train_index], X_train.iloc[val_index]
y_tr, y_val = y_train.iloc[train_index], y_train.iloc[val_index]
lgb_train = lgb.Dataset(X_tr, y_tr)
lgb_val = lgb.Dataset(X_val, y_val)
clf = lgb.train(params, lgb_train, num_boost_round=88, valid_sets=[lgb_train, lgb_val])
oof1 = np.zeros(len(X_train))
oof1[val_index] = clf.predict(X_val, num_iteration=clf.best_iteration)
mse = mean_squared_error(y_train, oof1)
print('train_score : ', mse)
# Booster.save_model writes a text-format model file, despite the .pkl extension.
clf.save_model(src + 'ane.pkl')

If you have questions or need help, please message me privately.

5. Further reading

Study notes on Optuna, the tuning powerhouse

Reprinted from www.mshxw.com
Original article: https://www.mshxw.com/it/861415.html