This repository has been archived by the owner on Sep 18, 2024. It is now read-only.

Verification of GP tuner accuracy when performing HPO / saving the GP tuner iteratively #5756

Closed Answered by haison19952013
haison19952013 asked this question in Q&A
  • Solved the problem by manually using the functions inside the nni package. The code was verified on the Forrester toy function:
from nni.algorithms.hpo.gp_tuner import GPTuner, GPClassArgsValidator
import os
from datetime import datetime
import pickle
import matplotlib.pyplot as plt
import numpy as np

search_space = {
    'x': {'_type': 'uniform', '_value': [0, 1]}
}

def hf_function(x):
    # Forrester function: f(x) = (6x - 2)^2 * sin(12x - 4)
    return ((x * 6 - 2) ** 2) * np.sin((x * 6 - 2) * 2)

n_initial = 5
n_max = 20
GP_tuner = GPTuner(optimize_mode="minimize", utility='ei', kappa=5, xi=0, nu=2.5, alpha=1e-6, cold_start_num=n_initial, selection_num_warm_up=100000, selection_num_starting_points=250)
GP_tuner.u…
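The snippet above is cut off, but the pattern it describes — driving a tuner by hand and checkpointing it between iterations — can be sketched end to end. The following is a minimal, self-contained illustration using a hypothetical stand-in random-search tuner so it runs without nni installed; the stand-in mimics the three-method nni Tuner interface (`update_search_space`, `generate_parameters`, `receive_trial_result`), and with nni available a `GPTuner` instance could be driven and pickled the same way.

```python
# Hedged sketch of a manual tune-and-checkpoint loop on the Forrester function.
# RandomStandInTuner is a hypothetical stand-in, NOT nni's GPTuner; it only
# mirrors the Tuner interface so the iterative save/resume pattern is runnable.
import math
import pickle
import random


def hf_function(x):
    # Forrester function: f(x) = (6x - 2)^2 * sin(12x - 4)
    return ((x * 6 - 2) ** 2) * math.sin((x * 6 - 2) * 2)


class RandomStandInTuner:
    """Minimal stand-in mimicking the tuner interface used above."""

    def __init__(self):
        self.search_space = None
        self.history = []  # list of (parameters, observed value)

    def update_search_space(self, search_space):
        self.search_space = search_space

    def generate_parameters(self, parameter_id):
        lo, hi = self.search_space['x']['_value']
        return {'x': random.uniform(lo, hi)}

    def receive_trial_result(self, parameter_id, parameters, value):
        self.history.append((parameters, value))


search_space = {'x': {'_type': 'uniform', '_value': [0, 1]}}

tuner = RandomStandInTuner()
tuner.update_search_space(search_space)

checkpoint = 'tuner_checkpoint.pkl'
for i in range(20):
    params = tuner.generate_parameters(i)
    y = hf_function(params['x'])
    tuner.receive_trial_result(i, params, y)
    # Persist the whole tuner each iteration so HPO can resume after a restart.
    with open(checkpoint, 'wb') as f:
        pickle.dump(tuner, f)

# Resuming: reload the pickled tuner and continue from its observation history.
with open(checkpoint, 'rb') as f:
    resumed = pickle.load(f)
best = min(resumed.history, key=lambda t: t[1])
print(f"best x={best[0]['x']:.3f}, f={best[1]:.3f}")
```

Because the observation history travels with the pickled object, the resumed tuner picks up exactly where the previous run stopped, which is the property the question is verifying for the GP tuner.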

Replies: 1 comment

Answer selected by haison19952013