readthedocs: https://utensor-cgen.readthedocs.io/en/latest/
- with setup.py
$ python setup.py install
- with pip
$ pip install utensor_cgen
- with pipenv
# install `utensor_cgen` (develop mode)
$ PIPENV_VENV_IN_PROJECT=1 pipenv install -d
# spawn a subshell and activate virtualenv
$ pipenv shell
# get help message of `utensor-cli`
$ utensor-cli -h
Troubleshooting with pipenv
If you have trouble with installation using pipenv, try
$ PIPENV_VENV_IN_PROJECT=1 pipenv install -d --skip-lock
There is a known issue with pip and pipenv; please refer to this issue for details.
- short answer: downgrading to pip==18.0 may help :)
Tensorflow requires setuptools<=39.1.0 (the latest is 40.4.3 by the time this README is written)
- please downgrade to setuptools==39.1.0
- my recommendation is to use virtualenv
$ utensor-cli show <model.pb>
Show all nodes and detailed information of a given pb file or :class:`.uTensorGraph` pickle file.
Run utensor-cli show --help
for detailed information.
IMPORTANT: pb files are deprecated in favor of Tensorflow 2.x; please refer to End-to-End Training with Keras for details
$ utensor-cli convert <model.pb> \
--output-nodes=<node name>[,<node name>,...] \
[--config=config.toml]
Convert the given pb file into cpp/hpp files.
Note that --output-nodes is a required option. It takes the names of the nodes you want to output, separated by commas for multiple values.
In graph theory terminology, they are the leaf nodes of your graph.
Use --config to pass a configuration file to the cli; you can use the generate-config command to generate one (see below).
$ utensor-cli convert simple_model.pb --output-nodes=pred,logits
Run utensor-cli convert --help
for detailed information.
utensor-cli uses toml as its configuration format.
You can generate a configuration file for a given target as follows:
$ utensor-cli generate-config --target <target name> [-o filename.toml]
This command will generate a toml file listing all configurable values with their defaults.
You can modify the values and pass the file to the cli with the --config flag.
# generate config file
$ utensor-cli generate-config --target utensor -o myconfig.toml
# after editing myconfig.toml
$ utensor-cli convert mymodel.pb --config=myconfig.toml --output-nodes=output,...
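If you prefer to tweak the generated configuration programmatically instead of by hand, the toml package can read and write the file. This is only a sketch; the commented key path below is purely illustrative (not a real utensor_cgen option name), so inspect the file produced by generate-config for the actual sections and keys.
# A minimal sketch of editing the generated config with the `toml` package.
import toml

with open('myconfig.toml') as fid:
    config = toml.load(fid)

# e.g. flip some flag (hypothetical key path, shown for illustration only):
# config['backend']['some_component']['some_flag'] = True

with open('myconfig.toml', 'w') as fid:
    toml.dump(config, fid)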
Use :mod:`utensor_cgen` as Library
With :class:`.uTensorGraphMatcher`, performing isomorphic subgraph matching along with replacing or manipulating the matched subgraph(s) takes just a few lines of code:
from utensor_cgen.matcher import uTensorGraphMatcher
# `pattrn_ugraph` is the pattern to match with
pattrn_ugraph = ...
matcher = uTensorGraphMatcher(pattrn_ugraph)
# a larger graph to perform subgraph matching on
subject_ugraph = ...
# matches is a list of `uTensorGraphMatch` objects
matches = matcher.match_all(subject_ugraph)
if matches:
    # do stuff with the matches
Note: we'll use operation/node/layer interchangeably in the documentation
- It's a commonly seen pattern in convolutional neural networks (CNN): conv -> relu -> pooling. That is, a 2D convolution followed by a relu layer and then a pooling down-sampling layer.
- With our :class:`.uTensorGraphMatcher`, you can locate such a pattern in your CNN model and fuse/replace the matched nodes into one optimized :class:`.QuantizedFusedConv2DMaxpool` node.
- Left: original graph
- Middle: matched convolution layer
- Right: replace the matched layer with a specialized QuantizedFusedConv2DMaxpool node
- Though dropout is an effective technique to improve the training performance of your model, it's not necessary during the inference phase.
- In mainstream frameworks such as Tensorflow or PyTorch, a dropout layer is typically implemented with other elementary operations/nodes. As a result, finding and removing those nodes for inference optimization (both in model size and prediction time) is non-trivial and error-prone.
- With our :class:`.uTensorGraphMatcher`, you can find and remove the dropout nodes as illustrated in the following picture.
- Left: original graph with dropout layers
- Middle: matched dropout layers
- Right: graph with dropout layers removed
For now, we mainly use Tensorflow for declaring the pattern graph for the matcher.
A high-level graph builder is on its way; see Future Works for details.
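For example, a small matmul -> relu pattern could be declared with plain Tensorflow 1.x ops as in the sketch below. This is only a sketch, not the library's official example: converting the resulting GraphDef into a :class:`.uTensorGraph` requires one of utensor_cgen's frontend parsers, whose exact import path and signature vary between versions, so that step is elided here.
# A minimal sketch: declare a `matmul -> relu` pattern with Tensorflow 1.x ops.
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()

patrn_graph = tf.Graph()
with patrn_graph.as_default():
    x = tf.placeholder(tf.float32, shape=[None, 4], name='input')
    w = tf.constant(np.random.rand(4, 4), dtype=tf.float32, name='weight')
    out = tf.nn.relu(tf.matmul(x, w), name='relu_out')

patrn_graph_def = patrn_graph.as_graph_def()
# Turning `patrn_graph_def` (plus the output node name 'relu_out') into a
# `uTensorGraph` is done with one of utensor_cgen's frontend parsers; the
# exact call differs between versions and is left out here.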
Consider the following simple multi-layer perceptron (simple_mnist.pb):
Once the optimization transformer tensor_alloc, an offline tensor memory allocation planner, is enabled, utensor-cli will generate uTensor runtime code that uses the following optimized allocation plan:
- y-axis: tensor names ordered by topological sorting
- x-axis: the memory span occupied by each tensor, that is, the memory address offset and the size of the tensor
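The idea behind such an offline plan can be illustrated with a toy first-fit packer: tensors whose lifetimes (in topological order) do not overlap may share the same address range. This is only an illustration of the concept, not the actual tensor_alloc algorithm; the names, sizes, and lifetimes below are made up.
# Toy offline tensor allocation planning (NOT the real `tensor_alloc`).
from collections import namedtuple

Tensor = namedtuple('Tensor', ['name', 'size', 'first_use', 'last_use'])


def plan_offsets(tensors):
    """Greedy first-fit: give each tensor the lowest offset that does not
    collide, in both lifetime and address range, with tensors already placed."""
    placed = []  # list of (tensor, offset)
    plan = {}
    for t in sorted(tensors, key=lambda t: t.size, reverse=True):
        offset = 0
        for other, other_offset in sorted(placed, key=lambda p: p[1]):
            in_time = not (t.last_use < other.first_use or other.last_use < t.first_use)
            in_space = offset < other_offset + other.size and other_offset < offset + t.size
            if in_time and in_space:
                offset = other_offset + other.size  # bump past the conflict
        placed.append((t, offset))
        plan[t.name] = (offset, offset + t.size)
    return plan


# made-up lifetimes for a small perceptron-like graph
plan = plan_offsets([
    Tensor('x',       size=784, first_use=0, last_use=1),
    Tensor('fc1_out', size=128, first_use=1, last_use=2),
    Tensor('pred',    size=10,  first_use=2, last_use=3),
])
print(plan)  # tensor name -> (begin offset, end offset); 'pred' reuses 'x's span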
- End-to-End Training with Keras
- Extending uTensor Backend by Adding Custom Operators
- Writing Plugins: Component Registration
Keras (Recommended)
Please refer to End-to-End Training with Keras for details
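As a rough illustration only (the linked tutorial is the authoritative walkthrough), training and saving a Keras model might look like the sketch below; the architecture, epoch count, and file name are placeholders.
# A minimal Keras training-and-save sketch, assuming Tensorflow 2.x.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
model.fit(x_train / 255.0, y_train, epochs=1)

# the saved model is then handed to utensor-cli / utensor_cgen as described
# in the End-to-End Training with Keras tutorial
model.save('simple_mnist.h5')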
- Freeze your tensorflow.Graph
- please refer to this issue tracker for details; a minimal freezing sketch is also given after this list
- especially this comment by Robin2091
- Follow instructions in :ref:`install` section to install :mod:`utensor_cgen`
- then utensor-cli should be available in your console
- Inspect your pb file to find the output node
# verbose mode
$ utensor-cli show graph.pb
# or oneline mode
$ utensor-cli show graph.pb --oneline
- Convert the protobuf file to C/C++ source code with utensor-cli
- suppose the output node is pred in graph.pb
$ utensor-cli convert --output-nodes=pred graph.pb
- Compile your application code with the generated C/C++ and weights files
- You should find your model's C/C++ and weights files in the models and constants directories, respectively
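For the graph-freezing step above, a minimal sketch using the Tensorflow 1.x API is shown below. The toy model (x -> matmul -> argmax named pred, matching the convert example) is only a placeholder; the real procedure for your own model is described in the issue comment linked above.
# A minimal sketch of freezing a Tensorflow 1.x graph into a pb file.
import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()

with tf.Session(graph=tf.Graph()) as sess:
    x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
    w = tf.Variable(tf.zeros([784, 10]), name='w')
    pred = tf.argmax(tf.matmul(x, w), axis=1, name='pred')
    sess.run(tf.global_variables_initializer())

    # bake variable values into constants so the graph is self-contained
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, output_node_names=['pred'])

with open('graph.pb', 'wb') as fid:
    fid.write(frozen_graph_def.SerializeToString())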
- follow the steps in the :ref:`install_dev` section
- run the tests as follows
# run with `make`
$ make tests
# run with `pipenv`
$ pipenv run pytest -m 'not slow_test and not deprecated' tests
- High-level graph builder api for building :class:`.uTensorGraph`.
- Currently utensor_cgen uses the TensorFlow api for building the IR graph, uTensorGraph.
- With a high-level graph builder, users can build their uTensorGraph easily and will not need to take care of the integrity of the graph; the builder will take care of it automatically.
- Automatically update the TFLite fbs file