This post tackles a regression problem with a single input feature.

I wrote this script for fun and in preparation for an upcoming mathematical modeling contest (and also simply to complete my daily blog task ✌( •̀ ω •́ )y). It didn't take much time, which means I still get to sleep...

I wrote it entirely by myself, without referencing any code on GitHub. Great progress!

I committed it to my own GitHub, though the repository is not well organized.

Import Packages

import numpy as np
from keras.models import Sequential 
from keras.layers import Dense 
import matplotlib.pyplot as plt 
print ("Import finished")

Because importing Keras takes a little while, I print a message to confirm that everything was imported successfully.


Generating Data

Shuffle the points so that the later train/test split is random, and add some Gaussian noise:

X = np.linspace(0, 2, 300) 
np.random.shuffle(X)
Y = 3 * X + np.random.randn(*X.shape) * 0.33

Data Visualization

plt.scatter(X,Y)
plt.show()
print (X[:10],'\n',Y[:10])


Define Train and Test Data

X_train,Y_train = X[:260],Y[:260]
X_test,Y_test = X[260:],Y[260:]
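Shuffling X before computing Y keeps each pair aligned, so slicing gives a random split. An equivalent and slightly safer pattern is to shuffle an index array instead of the data itself (a numpy-only sketch, independent of the code above):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(0, 2, 300)
Y = 3 * X + rng.normal(size=X.shape) * 0.33

# Shuffle an index array instead of X, so X and Y always stay aligned
idx = rng.permutation(len(X))
X_train, Y_train = X[idx[:260]], Y[idx[:260]]
X_test, Y_test = X[idx[260:]], Y[idx[260:]]
print(X_train.shape, X_test.shape)  # (260,) (40,)
```

This way the same split can be applied to any number of parallel arrays.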

Establish LR Model
Input and output dimensions are both set to 1:

model = Sequential()
model.add(Dense(units=1, kernel_initializer="uniform", activation="linear", input_dim=1))
weights = model.layers[0].get_weights() 
w_init = weights[0][0][0] 
b_init = weights[1][0] 
print('Linear regression model is initialized with weights w: %.2f, b: %.2f' % (w_init, b_init)) 
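The double indexing into `get_weights()` can be confusing. This numpy-only sketch mimics the structure Keras returns for a `Dense(units=1, input_dim=1)` layer (the 0.02 is an arbitrary example value, not a real initialization):

```python
import numpy as np

# get_weights() on a Dense layer returns a list [kernel, bias]:
kernel = np.array([[0.02]])  # shape (input_dim, units) = (1, 1)
bias = np.array([0.0])       # shape (units,) = (1,)
weights = [kernel, bias]

w_init = weights[0][0][0]  # kernel[input 0][unit 0] -> scalar w
b_init = weights[1][0]     # bias[unit 0]            -> scalar b
print(w_init, b_init)
```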

The printed message shows the default, randomly initialized coefficients.


Choose Loss-Function and Optimizer
Define the loss as mean squared error and choose stochastic gradient descent as the optimizer:

model.compile(loss='mse', optimizer='sgd')
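Under the hood, `'mse'` plus `'sgd'` amounts to repeated gradient steps on the squared-error surface. Here is a numpy-only sketch of full-batch gradient descent on the same kind of data (the learning rate 0.05 and step count are arbitrary choices for the sketch, not Keras defaults):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(0, 2, 300)
Y = 3 * X + rng.normal(size=X.shape) * 0.33

w, b, lr = 0.0, 0.0, 0.05
for _ in range(2000):
    err = w * X + b - Y             # prediction error per point
    w -= lr * 2 * np.mean(err * X)  # d(MSE)/dw
    b -= lr * 2 * np.mean(err)      # d(MSE)/db
print(round(w, 2), round(b, 2))     # close to 3 and 0
```

Keras's SGD uses mini-batches rather than the full dataset per step, but the update rule is the same idea.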

Train Model
Run 500 epochs of SGD:

model.fit(X_train, Y_train, epochs=500, verbose=1)

The loss eventually stabilizes at around 0.0976, close to the variance of the injected noise (0.33² ≈ 0.109), which is roughly the best a linear model can do on this data:


Test Model

Y_pred = model.predict(X_test)
plt.scatter(X_test,Y_test)
plt.plot(X_test,Y_pred)
plt.show()
weights = model.layers[0].get_weights() 
w_final = weights[0][0][0] 
b_final = weights[1][0] 
print('Linear regression model is trained with weights w: %.2f, b: %.2f' % (w_final, b_final)) 

The final weights are 3.00 and 0.03, very close to the true values (w = 3, b = 0). Note that 0.33 is the standard deviation of the noise, not the intercept; the small residual bias of 0.03 is caused by that noise.
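As a sanity check, the closed-form least-squares fit on freshly generated data of the same form recovers essentially the same numbers (a sketch; the exact decimals depend on the random noise):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(0, 2, 300)
Y = 3 * X + rng.normal(size=X.shape) * 0.33

# A degree-1 polynomial fit is ordinary least squares for y = w*x + b
w_ls, b_ls = np.polyfit(X, Y, 1)
print(round(w_ls, 2), round(b_ls, 2))
```

For a single-feature linear model, 500 epochs of SGD is just an expensive way to approach this closed-form solution; the Keras version pays off once the model grows beyond one layer.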


Use the Model

Feed 1.66 in as the feature:

a = np.array([[1.66]])  # shape (1, 1): one sample, one feature, as Keras expects
pred = model.predict(a)
print(pred)
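Because Dense layers consume 2-D arrays of shape (batch, features), several inputs can be scored at once. This numpy sketch reproduces the same affine map using the weights reported after training above (3.00 and 0.03); it is an illustration of what `Dense(1)` computes, not a call into the trained model:

```python
import numpy as np

w, b = 3.00, 0.03                 # weights printed after training
a = np.array([[1.66], [0.50]])    # shape (2, 1): two samples, one feature
pred = a @ np.array([[w]]) + b    # the affine map a Dense(1) layer applies
print(pred)                       # one prediction per row
```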


Tomorrow I will extend this script into a multi-dimensional regression model that can solve multi-feature regression problems.


JackieFang

✌( •̀ ω •́ )y