I installed Ubuntu 16.04 and followed the instructions on pp. 316-319 of Chollet's "Deep Learning with R" more or less without incident. This involved downloading CUDA 9 and cuDNN 7. I did the R package installs, etc. Finally, I downloaded and ran the example code from
https://keras.rstudio.com/. This seemed to work; that is, it produced output similar to that shown on that page. My problem is that I can't tell whether it is executing on the CPU or the GPU. There seem to be three other relevant pieces of information:
1) At the start of model fitting, I get the following messages (logged at level I, i.e. informational):
2018-08-06 17:08:29.363806: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2018-08-06 17:08:29.529512: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:897] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2018-08-06 17:08:29.529930: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1392] Found device 0 with properties:
name: GeForce GTX 750 Ti major: 5 minor: 0 memoryClockRate(GHz): 1.0845
pciBusID: 0000:01:00.0
totalMemory: 1.95GiB freeMemory: 1.52GiB
2018-08-06 17:08:29.529942: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1471] Adding visible gpu devices: 0
2018-08-06 17:08:36.926419: I tensorflow/core/common_runtime/gpu/gpu_device.cc:952] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-08-06 17:08:36.926442: I tensorflow/core/common_runtime/gpu/gpu_device.cc:958] 0
2018-08-06 17:08:36.926447: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0: N
2018-08-06 17:08:36.926579: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1084] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 1270 MB memory) -> physical GPU (device: 0, name: GeForce GTX 750 Ti, pci bus id: 0000:01:00.0, compute capability: 5.0)
2) System elapsed time for the fit was about 52 seconds (see the timing sketch at the end of this post).
3) Chollet's suggestion of opening a separate terminal window and entering
watch -n 5 Nvidia-smi -a --display=utilization
generates the error message:
sh: 1: Nvidia-smi not found
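A guess about (3): Linux command names are case-sensitive, and the monitoring utility that ships with the NVIDIA driver is lowercase nvidia-smi, so presumably the command should have been:

watch -n 5 nvidia-smi -a --display=utilization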
The mention of GeForce in (1) seems encouraging. Does anyone know whether the 52 seconds in (2) reflects CPU or GPU execution? And if the GPU is truly working, is (3) anything more than the capitalization slip noted above?
Any help would be much appreciated.
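In case it helps, here is a minimal sketch of how I would try to verify GPU visibility from R and re-time the fit on the CPU for comparison. It assumes the TF 1.x test API exposed by the tensorflow R package; model, x_train, and y_train stand in for the objects from the keras.rstudio.com example, and the epochs/batch_size values are copied from that example:

library(tensorflow)

# TRUE if this TensorFlow build can see a usable GPU (TF 1.x test API)
tf$test$is_gpu_available()

# Name of the GPU device, e.g. "/device:GPU:0"; empty string if none
tf$test$gpu_device_name()

# For a CPU-only timing comparison, hide the GPU *before* TensorFlow is
# initialized, i.e. set this in a fresh R session and then re-run the fit:
# Sys.setenv(CUDA_VISIBLE_DEVICES = "-1")

library(keras)
timing <- system.time(
  model %>% fit(x_train, y_train, epochs = 30, batch_size = 128)
)
timing["elapsed"]  # comparable to the ~52 seconds reported above

If the normal run is several times faster than the CUDA_VISIBLE_DEVICES="-1" run, that would presumably settle question (2).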