n_clusters_
4
Yup, got it working on 0.001% of the data set... Now to work out how to batch process this np.array. I suppose it is time to buy a GTX 660 and use TensorFlow.
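For what it's worth, one way to work through a large np.array in manageable chunks is a simple batching generator. This is just a sketch; `iter_batches` and the `batch_size` value are made up here, and the per-batch work would be whatever your clustering step actually needs:

```python
import numpy as np

def iter_batches(arr, batch_size):
    # Yield consecutive row-slices of the array, batch_size rows at a time.
    # Slicing a NumPy array returns views, so this copies nothing.
    for start in range(0, len(arr), batch_size):
        yield arr[start:start + batch_size]

data = np.arange(10_000).reshape(-1, 2)  # stand-in for the real data set
totals = [batch.sum() for batch in iter_batches(data, batch_size=1024)]
```

Tune `batch_size` to whatever fits in RAM (or GPU memory, if you do go the GPU route).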
Keras == "very few lines of code"
Bookmarked this video a while back for future reference. When you mentioned the batch processing issue, it reminded me of this presentation.
Possible batch processing solution @17:32 with the model.fit() function.
I assume that model.fit() is a wrapper around some heavy parallel routines that segment your training data set's rows into manageable chunks to be batch-processed on the GPU as kernel functions. That saves you writing the for loops yourself, since .fit() takes care of the vectorization and/or compiled loops inside GPU space. Python loops are tremendously slow; you can vectorize as much as possible with NumPy arrays, but translating for loops into vectorized code is a little tricky.
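To illustrate the loop-speed point: the two computations below give the same result, but the second replaces the interpreted Python loop with a single call into NumPy's compiled code. A minimal sketch, with an arbitrary array size:

```python
import numpy as np

x = np.random.rand(100_000)

# Pure-Python loop: one interpreted iteration per element -- slow.
loop_total = 0.0
for v in x:
    loop_total += v

# Vectorized equivalent: one call into NumPy's compiled C routines,
# typically orders of magnitude faster at this size.
vec_total = x.sum()
```

The two totals agree up to floating-point rounding; the trick, as noted above, is spotting how to express the loop as whole-array operations in the first place.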
Keras does all the grunt work.
Keras is great for prototyping.