We’re excited to share Model Explorer – a powerful graph visualization tool designed to help you understand and debug your ML models. With an intuitive, hierarchical visualization of even the largest graphs, Model Explorer enables developers to overcome the complexities of optimizing models for edge devices. This is the third blog post in our series covering Google AI Edge developer releases. If you missed the first two, be sure to check out the AI Edge Torch and Generative API blogs.
Developed initially as a utility for Google researchers and engineers, Model Explorer is now publicly available as part of our Google AI Edge family of products. The initial version of Model Explorer offers the following:
- GPU-based rendering engine to visualize large model graphs
- Popular ML framework support
- Runs directly in Colab notebooks
- Adapter extension system to visualize additional model formats
- Overlay metadata (e.g., attributes, inputs/outputs, etc.) and custom data (e.g., performance) directly on nodes
- Powerful suite of UI features designed to help you work faster
In this blog post we’ll walk through how to get started with Model Explorer and how to use Model Explorer’s custom data overlay API to debug and optimize your models. Further documentation and examples are available here.
Getting started
Model Explorer prioritizes a seamless user experience. Its easy-to-install PyPI package runs locally on your device, in Colab, and in a Python file, boosting the privacy and security of your model graphs.
Run locally on your device
$ pip install ai-edge-model-explorer
$ model-explorer
Starting Model Explorer server at http://localhost:8080
These commands will start a server at localhost:8080 and open the Model Explorer web app in a browser tab. See more information about the Model Explorer command line in the command line guide.
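You can also point the CLI at a model file directly instead of uploading it through the UI. A minimal sketch, assuming the positional model path and the --port flag work as described in the command line guide (the path below is a placeholder):

# Open the UI with a specific model already loaded.
$ model-explorer path/to/model.tflite

# Serve on a custom port (assumed flag; see the command line guide).
$ model-explorer path/to/model.tflite --port 8081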
Once you have a localhost server running, upload your model file from your computer (supported formats include those used by JAX, PyTorch, TensorFlow, and TensorFlow Lite) and select the best adapter for your model via the ‘Adapter’ dropdown menu on the home page. Visit here to learn how to use the Model Explorer adapter extension system to visualize unsupported model formats.
Run in Colab notebooks
# Download your models (this example uses an EfficientDet TFLite model)
import os
import tempfile
import urllib.request
tmp_path = tempfile.mkdtemp()
model_path = os.path.join(tmp_path, 'model.tflite')
urllib.request.urlretrieve("https://storage.googleapis.com/tfweb/model-graph-vis-v2-test-models/efficientdet.tflite", model_path)
# Install Model Explorer
!pip install ai-edge-model-explorer
# Visualize the downloaded EfficientDet model
import model_explorer
model_explorer.visualize(model_path)
After running the cell, Model Explorer will be displayed in an iFrame embedded in a new cell. In Chrome, the UI will also show an “Open in new tab” button that you can click to show the UI in a separate tab. Visit here to learn more about running Model Explorer in Colab.
Visualize models via the Model Explorer API
The model_explorer package provides convenient APIs to let you visualize models from files or from a PyTorch module, and a lower-level API to visualize models from multiple sources. Make sure to install it first by following the installation guide. To learn more, check out the Model Explorer API guide.
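For file-based formats, visualizing a model from a Python script is a one-liner using the same visualize API shown in the Colab example above (the model path here is a placeholder):

import model_explorer

# Starts a local server and opens the browser UI for the given model file.
model_explorer.visualize('path/to/model.tflite')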
Below is an example of visualizing PyTorch models. Visualizing PyTorch models requires a slightly different approach due to their lack of a standard serialization format. Model Explorer offers a specialized API to visualize PyTorch models directly, using the ExportedProgram from torch.export.export.
import model_explorer
import torch
import torchvision
# Prepare a PyTorch model and its inputs
model = torchvision.models.mobilenet_v2().eval()
inputs = (torch.rand([1, 3, 224, 224]),)
ep = torch.export.export(model, inputs)
# Visualize
model_explorer.visualize_pytorch('mobilenet', exported_program=ep)
No matter which way you visualize your models, under the hood Model Explorer implements GPU-accelerated graph rendering with WebGL and three.js, achieving a smooth, 60 FPS visualization experience even with graphs containing tens of thousands of nodes.
Debug performance and numeric accuracy with node data overlay
A key Model Explorer feature is its ability to overlay per-node data on a graph, allowing you to sort, search, and stylize nodes using the values in that data. Combined with the hierarchical view, per-node data overlay lets you quickly narrow down performance or numeric bottlenecks. The example below shows the mean squared error of a quantized TFLite model versus its floating point counterpart. Using Model Explorer, you can quickly identify that the quality drop is near the bottom of the graph, and adjust your quantization method as needed. Let’s walk through how to prepare and visualize custom node data.
This per-node data overlay allows users to quickly identify performance or numeric issues within a model.
Prepare custom node data
We provide a set of Python APIs to help you create custom node data and serialize it into a JSON file. At a high level, the custom node data has the following structure:
ModelNodeData: The top-level container storing all the data for a model. It consists of multiple GraphNodeData objects indexed by graph ids.
GraphNodeData: Holds the data for a specific graph within the model. It includes:
- results: Stores the custom node values, indexed by either node ids or output tensor names.
- thresholds or gradient: Color configurations that associate each node value with a corresponding node background color or label color, enabling visual representation of the data (the example below uses a gradient; a thresholds sketch follows it).
Below is a minimal example of preparing custom node data using the node_data_builder API. For in-depth documentation on preparing custom node data, visit node_data_builder.py in our GitHub repo.
from model_explorer import node_data_builder as ndb
# Populate values for the main graph in a model.
main_graph_results: dict[str, ndb.NodeDataResult] = {}
main_graph_results['node_id1'] = ndb.NodeDataResult(value=100)
main_graph_results['node_id2'] = ndb.NodeDataResult(value=200)
main_graph_results['any/output/tensor/name/'] = ndb.NodeDataResult(value=300)
# Create a gradient color mapping.
#
# The minimum value in `main_graph_results` maps to the color with stop=0.
# The maximum value in `main_graph_results` maps to the color with stop=1.
# Other values map to an interpolated color in between.
gradient: list[ndb.GradientItem] = [
    ndb.GradientItem(stop=0, bgColor='yellow'),
    ndb.GradientItem(stop=1, bgColor='red'),
]
# Construct the data for the main graph.
main_graph_data = ndb.GraphNodeData(
    results=main_graph_results, gradient=gradient)
# Construct the data for the model.
model_data = ndb.ModelNodeData(graphsData={'main': main_graph_data})
# You can save the data to a json file.
model_data.save_to_file('path/to/file.json')
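If you prefer discrete color buckets over a gradient, GraphNodeData also accepts thresholds. Here is a minimal sketch, assuming a ThresholdItem with value and bgColor fields as defined in node_data_builder.py (the exact bucketing rule is worth verifying there):

from model_explorer import node_data_builder as ndb

# Assumed semantics: a node value takes the color of the first threshold
# whose `value` it does not exceed (see node_data_builder.py for details).
thresholds: list[ndb.ThresholdItem] = [
    ndb.ThresholdItem(value=150, bgColor='green'),  # values <= 150
    ndb.ThresholdItem(value=300, bgColor='red'),    # values <= 300
]

# Use `thresholds` in place of `gradient` when building the graph data.
main_graph_data = ndb.GraphNodeData(
    results=main_graph_results, thresholds=thresholds)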
You can also visualize custom node data by creating a config object and passing it to the visualize_from_config API.
import model_explorer
from model_explorer import node_data_builder as ndb
# Create a `ModelNodeData` as shown in the previous section.
model_node_data = ...
# Create a config.
config = model_explorer.config()
# Add the model and custom node data to it.
(config
  .add_model_from_path('/path/to/a/model')
  # Add node data from a json file.
  # A node data json file can be generated by calling `ModelNodeData.save_to_file`
  .add_node_data_from_path('/path/to/node_data.json')
  # Add node data from a data class object.
  .add_node_data('my data', model_node_data))
# Visualize
model_explorer.visualize_from_config(config)
Early Adoption
Over the past few months, we’ve worked closely with early-adoption partners including Waymo and Google Silicon to improve our visualization tool. Notably, Model Explorer has played a crucial role in helping these teams debug and optimize on-device models like Gemini Nano, currently deployed in production.
What’s next?
In the coming months, we’ll focus on enhancing the core by refining key UI features like graph diffing and editing, empowering extensibility by allowing you to seamlessly integrate your own tools into Model Explorer, and open-sourcing the Model Explorer front-end. This is the third and final post of the AI Edge 3-part blog series. To stay up to date on the latest AI Edge updates, visit the AI Edge site.
Acknowledgements
This work is a collaboration across multiple functional teams at Google. We would like to extend our thanks to engineers Na Li, Jing Jin, Eric (Yijie) Yang, Akshat Sharma, Chi Zeng, Jacques Pienaar, Chun-nien Chan, Jun Jiang, Matthew Soulanille, Arian Arfaian, Majid Dadashi, Renjie Wu, Zichuan Wei, Advait Jain, Ram Iyengar, Matthias Grundmann, Cormac Brick, Ruofei Du, our Technical Program Manager, Kristen Wright, and our Product Manager, Aaron Karp. We’d also like to thank the UX team, including Zi Yuan, Anila Alexander, Elaine Thai, Joe Moran, and Amber Heinbockel.