Implementing Nonlinear Regression with Keras

Reference code (the data below is generated from a linear model and fit with a single Dense unit; a sketch of a genuinely nonlinear fit follows at the end):

import keras
import numpy as np
from keras.layers import Dense
from keras.models import Sequential

print("keras version=>", keras.__version__)
# keras version=> 2.3.1

########### Generate data ###########
np.random.seed(0)

# 1000 training samples with 5 features; targets follow y = Xw + b
# plus a little Gaussian noise
trainx = np.random.random((1000, 5))
weight = [[1], [2], [3], [4], [5]]
bias = 0.25
noise = np.random.normal(0, 0.0002, size=(1000,))
trainy = np.dot(trainx, weight) + bias
trainy += noise.reshape(trainy.shape)

# noise-free validation set drawn from the same distribution
validx = np.random.random((200, 5))
validy = np.dot(validx, weight) + bias

print("input data shape: trainx=>{},trainy=>{},validx=>{},validy=>{}".
      format(trainx.shape, trainy.shape, validx.shape, validy.shape))
# input data shape: trainx=>(1000, 5),trainy=>(1000, 1),validx=>(200, 5),validy=>(200, 1)
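
Since the targets come from a known linear model, the weights the network should recover can also be computed in closed form with ordinary least squares; the SGD fit below should land near the same values:

# Closed-form least-squares sanity check: append a column of ones so the
# bias is estimated together with the weights.
X = np.hstack([trainx, np.ones((trainx.shape[0], 1))])
coef, *_ = np.linalg.lstsq(X, trainy, rcond=None)
print("lstsq weights=>", coef[:-1].ravel())  # ~ [1, 2, 3, 4, 5]
print("lstsq bias=>", coef[-1][0])           # ~ 0.25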

# Define the model: a single Dense unit computes y = Wx + b
model = Sequential()
model.add(Dense(units=1, input_dim=5, use_bias=True))
model.summary()  # summary() prints itself and returns None, so no print() needed
# Model: "sequential_1"
# _________________________________________________________________
# Layer (type)                 Output Shape              Param #
# =================================================================
# dense_1 (Dense)              (None, 1)                 6
# =================================================================
# Total params: 6
# Trainable params: 6
# Non-trainable params: 0
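
The 6 parameters in the summary are exactly the 5 input weights plus 1 bias, which can be checked programmatically:

# A Dense layer mapping 5 inputs to 1 output holds 5 weights + 1 bias
assert model.count_params() == 5 * 1 + 1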

# Train the model with mean-squared-error loss and plain SGD
model.compile(loss='mse', optimizer='sgd')
model.fit(trainx, trainy, epochs=100, batch_size=10, validation_data=(validx, validy))
# Train on 1000 samples, validate on 200 samples
# Epoch 1/100
# 1000/1000 [==============================] - 0s 94us/step - loss: 7.9104 - val_loss: 0.9224
# Epoch 2/100
# 1000/1000 [==============================] - 0s 47us/step - loss: 0.9381 - val_loss: 0.7498
# Epoch 3/100
# ...
# ...
# Epoch 95/100
# 1000/1000 [==============================] - 0s 62us/step - loss: 5.4555e-07 - val_loss: 4.9707e-07
# Epoch 96/100
# 1000/1000 [==============================] - 0s 47us/step - loss: 4.7780e-07 - val_loss: 4.2271e-07
# Epoch 97/100
# 1000/1000 [==============================] - 0s 63us/step - loss: 4.1677e-07 - val_loss: 3.6716e-07
# Epoch 98/100
# 1000/1000 [==============================] - 0s 62us/step - loss: 3.6500e-07 - val_loss: 3.1522e-07
# Epoch 99/100
# 1000/1000 [==============================] - 0s 62us/step - loss: 3.2027e-07 - val_loss: 2.7231e-07
# Epoch 100/100
# 1000/1000 [==============================] - 0s 62us/step - loss: 2.8131e-07 - val_loss: 2.3556e-07
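
After training, the same validation error can be read back with model.evaluate; since the model was compiled with a single mse loss, it returns that one scalar:

# Should be close to the final val_loss in the log above
val_mse = model.evaluate(validx, validy, verbose=0)
print("validation mse=>", val_mse)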

# The learned parameters should match the generating weight and bias
W, b = model.layers[0].get_weights()
print('W=', W, 'b=', b)
# W= [[0.99923134]
#  [1.9992452 ]
#  [2.9992535 ]
#  [3.9993658 ]
#  [4.999247  ]]
# b= [0.2518842]
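
Note that everything above is still linear regression: one Dense unit with no activation, fit to linearly generated data. To fit an actually nonlinear relationship, insert a hidden layer with a nonlinear activation. A minimal sketch on a noisy sine target follows; the hidden width (10), 'tanh' activation, 'adam' optimizer, and epoch count are illustrative choices, not values from the run above:

# Nonlinear regression sketch (illustrative settings, see the note above)
x = np.linspace(-np.pi, np.pi, 400).reshape(-1, 1)
y = np.sin(x) + np.random.normal(0, 0.05, size=x.shape)

nl_model = Sequential()
nl_model.add(Dense(units=10, input_dim=1, activation='tanh'))  # hidden layer supplies the nonlinearity
nl_model.add(Dense(units=1))                                   # linear output unit for regression
nl_model.compile(loss='mse', optimizer='adam')
nl_model.fit(x, y, epochs=500, batch_size=32, verbose=0)
print("sine fit mse=>", nl_model.evaluate(x, y, verbose=0))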


