Ensemble Forecast With Keras On Gpu
I am trying to make an ensemble forecast with identically built Keras models. The data for the individual NNs have the same shape. I want to use the GPU because the models should be trained in parallel.
Solution 1:
Firstly, you are trying to build nummodels models and concatenate their outputs. Secondly, you want to .fit the merged model with multiple inputs and get multiple outputs - that is, each sub-model takes its own input separately, and the merged model produces nummodels outputs. Based on my understanding of your question, here is one possible solution.
Models
import tensorflow as tf
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Dense, Embedding, GlobalAveragePooling1D
# Define Model A
input_layer = Input(shape=(10,))
A2 = Embedding(1000, 64)(input_layer)
A2 = GlobalAveragePooling1D()(A2)
model_a = Model(inputs=input_layer, outputs=A2, name="ModelA")
# Define Model B
input_layer = Input(shape=(10,))
A2 = Embedding(1000, 64)(input_layer)
A2 = GlobalAveragePooling1D()(A2)
model_b = Model(inputs=input_layer, outputs=A2, name="ModelB")
# Define Model C
input_layer = Input(shape=(10,))
A2 = Embedding(1000, 64)(input_layer)
A2 = GlobalAveragePooling1D()(A2)
model_c = Model(inputs=input_layer, outputs=A2, name="ModelC")
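Since the three sub-models are identical, you could also build them with a small helper function. This is just a sketch, not part of the original answer; the name build_submodel is illustrative:
import tensorflow as tf
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Embedding, GlobalAveragePooling1D

def build_submodel(name):
    # same architecture as Model A/B/C above
    input_layer = Input(shape=(10,))
    x = Embedding(1000, 64)(input_layer)
    x = GlobalAveragePooling1D()(x)
    return Model(inputs=input_layer, outputs=x, name=name)

model_a = build_submodel("ModelA")
model_b = build_submodel("ModelB")
model_c = build_submodel("ModelC")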
Merge Model
import numpy as np
import tensorflow as tf
# concatenate their outputs
model_concat = tf.keras.layers.concatenate(
    [
        model_a.output, model_b.output, model_c.output
    ], name='target_concatenate')
# final activation layer
final_out1 = Dense(1, activation='sigmoid')(model_concat)
final_out2 = Dense(1, activation='sigmoid')(model_concat)
final_out3 = Dense(1, activation='sigmoid')(model_concat)
# whole model
composed_model = tf.keras.Model(
    inputs=[model_a.input, model_b.input, model_c.input],
    outputs=[final_out1, final_out2, final_out3]
)
# model viz
tf.keras.utils.plot_model(composed_model)
I/O
# compile the model
composed_model.compile(loss='binary_crossentropy', optimizer='adam')
# dummies
input_array1 = np.random.randint(1000, size=(32, 10))
input_array2 = np.random.randint(1000, size=(32, 10))
input_array3 = np.random.randint(1000, size=(32, 10))
# get some prediction - multi-input / multi-output
output_array1, output_array2, output_array3 = composed_model.predict(
    [input_array1, input_array2, input_array3])
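Training works the same way - one input array per sub-model and one target array per output head. The snippet below is only a sketch with random dummy targets; replace them with your real labels:
# dummy binary targets, one array per output head (illustrative only)
target1 = np.random.randint(2, size=(32, 1))
target2 = np.random.randint(2, size=(32, 1))
target3 = np.random.randint(2, size=(32, 1))

# multi-input / multi-output training
composed_model.fit(
    [input_array1, input_array2, input_array3],
    [target1, target2, target3],
    epochs=2, batch_size=8)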
See these answers for more background:
- Multi-input Multi-output Model with Keras Functional API
- Merge 3 Deep Network and Train End-to-End
- Training Multiple Input and Output Keras Model
Based on your comment and the edited part of your question, try the following:
models = []
for i in range(nummodels):
    # build input_layer and A2 as in the models above
    ....
    ....
    model = Model(inputs=input_layer, outputs=A2, name="Model"+str(i+1))
    models.append(model)
    # inputs.append(model.input)
    # outputs.append(model.output)

# model_concat = tf.keras.layers.concatenate(outputs, name='target_concatenate')
model_concat = tf.keras.layers.concatenate(
    [
        models[0].output, models[1].output, models[2].output
    ], name='target_concatenate')

final_outputs = []
for i in range(nummodels):
    final_outputs.append(Dense(1, activation='sigmoid')(model_concat))

# whole model
composed_model = tf.keras.Model(inputs=[models[0].input,
                                        models[1].input,
                                        models[2].input],
                                outputs=final_outputs)
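If nummodels is not fixed at 3, the commented-out lines above hint at the fully general version - collect the inputs and outputs in lists instead of indexing models explicitly. A minimal sketch of that idea:
# gather inputs/outputs from all sub-models built in the loop
inputs = [m.input for m in models]
outputs = [m.output for m in models]

model_concat = tf.keras.layers.concatenate(outputs, name='target_concatenate')
final_outputs = [Dense(1, activation='sigmoid')(model_concat) for _ in range(nummodels)]

composed_model = tf.keras.Model(inputs=inputs, outputs=final_outputs)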