I don't know if "logarithmic regression" is the right term. I need to fit a curve to my data, like a polynomial curve, but one that flattens out at the end.
Here is an image: the blue curve is what I have (2nd-order polynomial regression) and the magenta curve is what I need.
I have searched a lot and can only find linear regression and polynomial regression in sklearn, but no logarithmic regression. I need to plot the curve and then make predictions with that regression.
Here is the data for the plot image that I posted:
If I understand correctly, you want to fit the data with a function like y = a * exp(-b * (x - c)) + d.
I am not sure whether sklearn can do this, but you can use scipy.optimize.curve_fit() to fit your data with whatever function you define:
For your case, I experimented with your data and here is the result:
```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

my_data = np.genfromtxt('yourdata.csv', delimiter=',')
my_data = my_data[my_data[:, 0].argsort()]
xdata = my_data[:, 0].transpose()
ydata = my_data[:, 1].transpose()

# define the model function for fitting
def func(x, a, b, c, d):
    return a * np.exp(-b * (x - c)) + d

# initial guesses for a, b, c, d
init_vals = [50, 0, 90, 63]

# fit the data; bounds constrain each parameter to a plausible range
popt, pcov = curve_fit(func, xdata, ydata, p0=init_vals,
                       bounds=([0, 0, 90, 0], [1000, 0.1, 200, 200]))

# predict new data based on the fitted parameters
y_pred = func(200, *popt)
print(y_pred)

plt.plot(xdata, ydata, 'bo', label='data')
plt.plot(xdata, func(xdata, *popt), '-', label='fit')
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plt.show()
```
I found that the initial value for b is critical for fitting. I estimated a small range for it and then fit the data.
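To see why the initial guess for b matters, here is a minimal, self-contained sketch on synthetic data (the true parameter values and the random noise are my own assumptions, not your dataset): starting b inside a small positive range, with matching bounds, lets the optimizer recover the true decay rate.

```python
import numpy as np
from scipy.optimize import curve_fit

def func(x, a, b, c, d):
    return a * np.exp(-b * (x - c)) + d

# synthetic data generated from known parameters (a=50, b=0.05, c=90, d=60)
rng = np.random.default_rng(0)
x = np.linspace(90, 200, 60)
y = func(x, 50, 0.05, 90, 60) + rng.normal(0, 0.2, x.size)

# start b at a small positive value and keep it bounded in [0, 0.1]
popt, _ = curve_fit(func, x, y, p0=[50, 0.01, 90, 60],
                    bounds=([0, 0, 80, 0], [1000, 0.1, 200, 200]))
print(popt[1])  # recovered decay rate, close to the true b = 0.05
```

With a wildly wrong starting b (or no bounds), the solver can get stuck in a poor local minimum, which is why narrowing its range first helps.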
If you have no prior knowledge of the relationship between x and y, you can use the regression methods provided by sklearn, such as linear regression, kernel ridge regression (KRR), nearest neighbors regression, Gaussian process regression, etc., to fit nonlinear data. Find the documentation here
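As one example of that sklearn route, here is a minimal sketch using KernelRidge with an RBF kernel on synthetic saturating data (the data, kernel, and hyperparameter values are my own illustrative choices, not tuned for your dataset):

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

# synthetic saturating data, standing in for the data in the question
rng = np.random.default_rng(1)
X = np.linspace(90, 200, 80).reshape(-1, 1)
y = 50 * np.exp(-0.05 * (X.ravel() - 90)) + 60 + rng.normal(0, 0.2, 80)

# RBF-kernel ridge regression fits the nonlinear shape with no explicit formula
model = KernelRidge(kernel='rbf', alpha=0.1, gamma=1e-3).fit(X, y)

# predict at a new x value, the same way you would with any sklearn regressor
y_pred = model.predict(np.array([[150.0]]))
print(y_pred)
```

The trade-off versus curve_fit is that you give up the interpretable parameters (a, b, c, d) and, with kernel methods, extrapolation beyond the training range is unreliable; the upside is that you need no assumed functional form.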