Published: 2022-08-19 12:08
Stop saying you use LSTM() from now on!
So the question is: given that you have GPU resources (assumed here by default), which one should you choose?
The answer is CuDNNLSTM!
In my case, training a model with LSTM took 10 minutes 30 seconds. Simply switching the call from LSTM() to CuDNNLSTM() brought that down to less than a minute.
I also noticed that switching to CuDNNLSTM() substantially speeds up model.evaluate() and model.predict() as well.
On the same dataset, LSTM() took 10 and a half minutes while CuDNNLSTM() took about one minute. Prediction and evaluation also become much faster!
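Below is a minimal sketch of what the swap looks like in code, assuming a TensorFlow-1.x-era Keras where keras.layers.CuDNNLSTM is available (in TensorFlow 2, the plain LSTM layer dispatches to the cuDNN kernel automatically when its arguments allow it). The layer sizes and input shape here are purely illustrative.

```python
# Sketch only: drop-in swap between LSTM() and CuDNNLSTM().
# Assumes Keras 2.x with the TensorFlow 1.x backend, where CuDNNLSTM exists.
from keras.models import Sequential
from keras.layers import LSTM, CuDNNLSTM, Dense

def build_model(use_cudnn=True):
    model = Sequential()
    if use_cudnn:
        # GPU-only, cuDNN-backed implementation (much faster on GPU)
        model.add(CuDNNLSTM(128, input_shape=(100, 32)))
    else:
        # Portable reference implementation (runs on CPU or GPU)
        model.add(LSTM(128, input_shape=(100, 32)))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(optimizer='adam', loss='binary_crossentropy')
    return model
```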
CuDNNLSTM: a fast LSTM implementation backed by cuDNN. It can only be run on a GPU, with the TensorFlow backend.
CuDNNLSTM is faster (it uses GPU support), but it has fewer options than LSTM (dropout, for example).
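Since CuDNNLSTM does not accept the dropout / recurrent_dropout arguments, one common workaround, sketched below with the same illustrative shapes as above, is to insert a separate Dropout layer between stacked CuDNNLSTM layers. Note that this is not equivalent to recurrent dropout inside the cell.

```python
# Sketch only: regularizing stacked CuDNNLSTM layers with an external Dropout layer,
# since CuDNNLSTM itself exposes no dropout/recurrent_dropout arguments.
from keras.models import Sequential
from keras.layers import CuDNNLSTM, Dropout, Dense

model = Sequential()
model.add(CuDNNLSTM(128, return_sequences=True, input_shape=(100, 32)))
model.add(Dropout(0.2))   # applied between layers, not inside the recurrence
model.add(CuDNNLSTM(64))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy')
```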
Reference:
https://stackoverflow.com/questions/49987261/what-is-the-difference-between-cudnnlstm-and-lstm-in-keras