Jordan Shaw
http://jordanshaw.com
Accompanying Google Slides are viewable here
During the workshop, participants can follow along with an example project. If they would like, they can experiment with capturing training data (PoseNet), cleaning that data, and training a model (a TensorFlow.js RNN LSTM) within the example project’s dashboard. They can then visualize the model’s predictions with p5.js using a particle system sketch.
If participants wish to follow along and experiment with the example project, the following technical requirements are recommended:
- Download the workshop repo from GitHub
- Have Node.js and npm installed on your computer
- Have Visual Studio Code installed on your computer
- Have the Live Server extension added to VS Code
Before the workshop, run `npm install` from within both the `codeAndTrainModel` and `predictNewPoses` directories.
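For example, assuming both directories sit at the root of the cloned repo:

```sh
cd codeAndTrainModel && npm install
cd ../predictNewPoses && npm install
```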
Participants are also welcome to attend and follow along in a non-technical way; in that case, no prerequisites are required.
- To run the `dashboard` and `p5ParticleSketch`, which are based on p5.js, run the VS Code Live Server extension and go to your localhost:PORT
- To run `codeAndTrainModel` and `predictNewPoses`, enter each directory and install the dependencies `tfjs-node` and `pubnub` by running `npm install`. Then you can execute `node index.js` in either directory.
- Make sure to update the following variables to unique string values so you don't interfere with other data channels (see the sketch after this list):
  - In `dashboard/sketch.js`, update `channelName` and `aiChannelName` to unique values
  - In `p5ParticleSketch/sketch.js`, update `aiChannelName` to the same string used for the variable in the `dashboard` directory
  - In `predictNewPoses/index.js`, update `aiChannelName` to the same string used for the same variable names above
- Update the file path to the model directory in `predictNewPoses/index.js` to your system's path to the model files.
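For reference, the channel setup at the top of `dashboard/sketch.js` looks roughly like the sketch below; the keys, channel strings, and uuid are placeholders, not the repo's actual values.

```js
// dashboard/sketch.js (sketch only -- keys and channel names are placeholders)
const channelName = 'yourname-pose-data';   // raw PoseNet samples
const aiChannelName = 'yourname-ai-output'; // model predictions

// PubNub relays messages between the dashboard, the Node trainer, and the p5 sketch
const pubnub = new PubNub({
  publishKey: 'YOUR_PUBLISH_KEY',
  subscribeKey: 'YOUR_SUBSCRIBE_KEY',
  uuid: 'dashboard-client',
});

pubnub.subscribe({ channels: [channelName, aiChannelName] });
pubnub.addListener({
  message: (event) => {
    // Route incoming messages by channel
    if (event.channel === aiChannelName) {
      // handle a predicted pose...
    }
  },
});
```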
During the “Training Machines for Autonomous Interactive Artworks” workshop, participants will work through the three steps required to use ML in an interactive artwork. We will train a custom model to predict simplified human poses from data points collected using PoseNet, then create a generative artwork by feeding the predicted data points into a p5.js creative coding sketch for visualization.
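As a minimal sketch of the capture step, assuming the `@tensorflow-models/posenet` package and a playing webcam `<video>` element (the element id and the (x, y)-only simplification are illustrative, not the repo's exact code):

```js
// Illustrative sketch: collect one simplified pose per call using PoseNet
import * as posenet from '@tensorflow-models/posenet';

async function captureFrame(net, video) {
  const pose = await net.estimateSinglePose(video, { flipHorizontal: false });
  // Keep only the (x, y) coordinates of each keypoint
  return pose.keypoints.map((kp) => [kp.position.x, kp.position.y]);
}

async function main() {
  const video = document.getElementById('webcam'); // a playing <video> element
  const net = await posenet.load();
  const frame = await captureFrame(net, video);
  console.log(frame); // 17 [x, y] pairs for the classic PoseNet keypoints
}

main();
```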
In the first section, we will identify and distill the appropriate use cases and technologies for your project and artwork based on an idea and creative goal. We’ll continue by exploring ways to source your training data, then validate and clean the dataset in preparation for training. We will also review the different technologies that could be used in this process.
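One common cleaning pass looks roughly like the sketch below: drop frames that contain low-confidence keypoints and normalize coordinates to 0..1. The score threshold and canvas size here are assumptions, not the example project's actual values.

```js
// Illustrative cleaning step for recorded PoseNet frames
const MIN_SCORE = 0.5; // confidence threshold (assumption)
const WIDTH = 640;     // capture canvas size (assumption)
const HEIGHT = 480;

function cleanPose(pose) {
  // Discard the whole frame if any keypoint is unreliable, so the
  // training sequences stay evenly sampled in time
  if (pose.keypoints.some((kp) => kp.score < MIN_SCORE)) return null;
  return pose.keypoints.map((kp) => [
    kp.position.x / WIDTH,
    kp.position.y / HEIGHT,
  ]);
}

const rawPoses = []; // the recorded PoseNet frames would go here
const cleaned = rawPoses.map(cleanPose).filter((frame) => frame !== null);
```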
The second section will look at ways to use your data to train a new custom model for your project. We’ll discuss how to choose the training method and network best suited to your project. Specifically, in this workshop, we’ll dive deeper into the RNN LSTM neural network and why it was chosen for the example application.
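As a rough sketch of what such a model looks like in TensorFlow.js (the layer sizes, window length, and feature count below are illustrative, not the repo's actual hyperparameters):

```js
const tf = require('@tensorflow/tfjs-node');

// Predict the next pose from a sliding window of previous poses.
// SEQ_LEN and NUM_FEATURES (17 keypoints x 2 coordinates) are assumptions.
const SEQ_LEN = 20;
const NUM_FEATURES = 34;

const model = tf.sequential();
model.add(tf.layers.lstm({
  units: 64,
  inputShape: [SEQ_LEN, NUM_FEATURES],
  returnSequences: false, // only the last timestep feeds the output layer
}));
model.add(tf.layers.dense({ units: NUM_FEATURES })); // the predicted next pose

model.compile({ optimizer: 'adam', loss: 'meanSquaredError' });
model.summary();

// Training would then be:
//   await model.fit(xs, ys, { epochs: 50, batchSize: 32 });
// where xs has shape [numSamples, SEQ_LEN, NUM_FEATURES]
// and ys has shape [numSamples, NUM_FEATURES].
```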
Finally, once we have a trained model and have confirmed that it predicts plausible future data points, we will introduce ways our model could be used to create generative artwork. Part of this overview will introduce participants to different creative coding libraries and robotics toolkits as mediums for visualization and public engagement.
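To make that last step concrete, here is a minimal p5.js sketch of the particle-system idea: each predicted point spawns a short-lived particle. The placeholder data below stands in for predictions that, in the example project, would arrive over PubNub.

```js
// p5.js sketch: spawn particles at predicted pose points (illustrative)
let particles = [];
// Predicted keypoints, normalized 0..1; placeholder values here
let predicted = [[0.5, 0.3], [0.4, 0.6], [0.6, 0.6]];

function setup() {
  createCanvas(640, 480);
  noStroke();
}

function draw() {
  background(0, 40); // translucent fill leaves motion trails
  for (const [x, y] of predicted) {
    particles.push({ x: x * width, y: y * height, life: 255 });
  }
  for (const p of particles) {
    p.y -= 1;      // simple upward drift
    p.life -= 4;   // fade out
    fill(255, p.life);
    ellipse(p.x, p.y, 6, 6);
  }
  particles = particles.filter((p) => p.life > 0);
}
```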
Some of the technologies introduced and discussed during this workshop are TensorFlow.js, p5.js, PubNub, PoseNet and BlazePose, LSTM networks, and data collection, cleaning, and validation for training. We’ll also look at some creative coding libraries and communication protocols to use while creating digital artwork, like Processing, openFrameworks, TouchDesigner, Arduino, OSC, MIDI, and WebSockets.
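As one example of those protocols, a browser sketch could push each prediction to another tool over a plain WebSocket; the server URL and message shape below are hypothetical.

```js
// Send predicted poses to another tool (e.g. TouchDesigner or a Node bridge)
// over WebSockets. The server URL and message format are hypothetical.
const ws = new WebSocket('ws://localhost:8080');

ws.addEventListener('open', () => {
  const prediction = [[0.5, 0.3], [0.4, 0.6]]; // placeholder predicted points
  ws.send(JSON.stringify({ type: 'pose', points: prediction }));
});
```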
This workshop will take participants step by step through an example project created specifically for TMLS. A GitHub repository will be shared before the conference so participants can follow along during the workshop and experiment independently during each section.