
Whether we are using Google Colab or accessing Cloud TPUs directly, the program below gives only limited information about the underlying TPUs:

import os
import tensorflow as tf

tpu_address = 'grpc://' + os.environ['COLAB_TPU_ADDR']
print('TPU address is', tpu_address)

def printTPUDevices():
    # List the devices the remote TPU server reports for this session.
    with tf.Session(tpu_address) as session:
        devices = session.list_devices()
        print('TPU devices:')
        for device in devices:
            print(device)
        return devices

printTPUDevices()

Is there any documented way, programmatically or via bash commands, to display more information about the underlying TPUs? See this gist for an example: https://gist.github.com/neomatrix369/256913dcf77cdbb5855dd2d7f5d81b84.


1 Answer


The Cloud TPU system architecture is a bit different from that of GPUs, so this level of information is not available.

Because the client talks to a remote TensorFlow server and uses XLA, client code doesn't need to change based on the features available on the TPU: the remote server compiles machine instructions according to the TPU's capabilities.
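
As a minimal sketch of what the remote server does report (assuming the same TF 1.x Colab setup as in the question), each entry from list_devices() carries only a device name, a device type, and a memory limit; lower-level details such as clock rates or core topology are not surfaced at this layer:

import os
import tensorflow as tf

tpu_address = 'grpc://' + os.environ['COLAB_TPU_ADDR']

with tf.Session(tpu_address) as session:
    for device in session.list_devices():
        # DeviceAttributes exposes roughly this much detail per device.
        print(device.name, device.device_type, device.memory_limit_bytes)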

However, the Cloud TPU Profiler does give a lower-level view of the TPU for performance optimization. You can see a trace-level view of which operations are using up memory and compute time.
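
As a rough sketch of capturing such a trace, assuming the cloud-tpu-profiler pip package (the TPU name and GCS bucket below are placeholders):

# Install the profiler and capture a trace from a running TPU.
pip install cloud-tpu-profiler
capture_tpu_profile --tpu=$TPU_NAME --logdir=gs://your-bucket/profile-logs

# Inspect the captured trace in TensorBoard's Profile tab.
tensorboard --logdir=gs://your-bucket/profile-logs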

answered 2018-11-19T23:34:26.637