Introduction
pip install losswise
Welcome to the Losswise API reference! By adding just a few lines of code to your ML / AI / optimization code, you get beautiful interactive visualizations, a tabular display of your models’ performance, and much more.
If you have any problems or questions please send us an email at support@losswise.com.
To get started, first install Losswise’s Python client as seen on the right (source code is available at https://github.com/Losswise/losswise-python).
A minimal example
import losswise
import time
import random
losswise.set_api_key("your_api_key")
session = losswise.Session(tag='my_special_lstm', params={'rnn_size': 512}, max_iter=10)
graph = session.graph('loss', kind='min')
for x in range(10):
    train_loss = 1. / (0.1 + x + 0.1 * random.random())
    test_loss = 1.5 / (0.1 + x + 0.2 * random.random())
    graph.append(x, {'train_loss': train_loss, 'test_loss': test_loss})
    time.sleep(1.)
session.done()
Each project in Losswise is associated with an API key. Once you sign up for Losswise you will automatically be given an API key. You can start logging to Losswise by simply running the code on the right. Just remember to replace your_api_key with your API key from https://losswise.com/dashboard.
Projects
losswise.set_api_key("your_api_key")
Projects are the highest-level organizational structure in Losswise. Projects are created right from your web browser at https://losswise.com/dashboard.
Experiments within a single project should be directly comparable to each other. If they are unrelated, they belong in separate projects.
The main reason to use separate projects is that each project’s tabular dashboard automatically infers columns (e.g. min(loss), min(ppl), …) from the data logged to that project. If you log unrelated experiments to the same project, the sorting functionality for these columns becomes less meaningful, and you may end up with too many columns. The relevant project is specified in your code via its API key.
Sessions
session = losswise.Session(tag='my_special_lstm', max_iter=10, params={'rnn_size': 512}, track_git=False)
A Session instance is simply an experiment that belongs to a project. The Session constructor takes the following input parameters:
Parameter | Type | Description
---|---|---
tag (optional) | string | String identifier for the experiment. By default, Losswise will try to use the git branch name as the tag.
max_iter (optional) | integer | Number of iterations in the experiment, used to estimate experiment completion as well as for simple graph smoothing.
params (optional) | dict | Used to associate hyperparameters with the experiment. Any JSON-serializable object will work here (including nesting).
track_git (optional) | bool | Should Losswise track the git diff and the current branch? Defaults to true.
When a session object is created, Losswise will send heartbeat messages to make sure your code is still running. If multiple heartbeat messages are missed and session.done() was not called, Losswise will assume your program has crashed and will set this session’s status in the dashboard as “Cancelled”. If this was caused by a network outage, your code will continue running as normal. Losswise was designed to be non-intrusive and robust: the last thing we’d ever want to do is crash your program or slow it down.
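For instance, here is a minimal sketch (using only the API shown above; the tag and loss values are placeholders) that wraps training in try/finally so that session.done() still runs if the script exits early. If you would rather have an interrupted run show up as “Cancelled”, simply call session.done() only after a successful run.
import losswise

losswise.set_api_key("your_api_key")
session = losswise.Session(tag='my_special_lstm', max_iter=10)
graph = session.graph('loss', kind='min')
try:
    for x in range(10):
        # placeholder loss value; replace with your real training step
        graph.append(x, {'train_loss': 1.0 / (x + 1)})
finally:
    # ensure the session is marked as finished rather than "Cancelled"
    session.done()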
Graphs
graph_loss = session.graph('loss', kind='min', display_interval=1)
graph_accuracy = session.graph('accuracy', kind='max', display_interval=1)
You may create any number of graphs from a session object by calling the session.graph method, which takes the following parameters:
Parameter | Type | Description
---|---|---
name | string | Name of the graph (e.g. “loss” or “accuracy”), used as the graph title in the dashboard.
kind (optional) | string | Specifies whether we are interested in min or max values. Available values: min and max.
display_interval (optional) | integer | Interval at which points are logged; pointwise values are averaged within each interval.
All values past the first iteration are averaged over the previous display_interval iterations. Setting display_interval=1 means that every iteration is logged to Losswise, without smoothing. The display_interval value only applies to graphs for which graph.append(...) is called at every iteration x (for example, batch loss): if you compute an accuracy value for a graph only after each epoch, display_interval is ignored. The reason for having display_interval is that logging the loss at every iteration makes graphs noisy and slow to load: it’s just wasteful. The display_interval value allows intermittent, smoothed logging to Losswise for a better developer experience.
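As an illustration, here is a sketch (the loop structure, compute_batch_loss and evaluate_accuracy are placeholders, not part of the Losswise API) that uses a larger display_interval for a noisy per-batch loss graph while leaving a per-epoch accuracy graph unsmoothed:
# per-batch graph: average and log once every 50 iterations to keep the graph smooth and fast to load
graph_loss = session.graph('loss', kind='min', display_interval=50)
# per-epoch graph: only one point per epoch, so display_interval is effectively ignored
graph_accuracy = session.graph('accuracy', kind='max', display_interval=1)

num_epochs, batches_per_epoch = 10, 500
for epoch in range(num_epochs):
    for batch in range(batches_per_epoch):
        x = epoch * batches_per_epoch + batch
        graph_loss.append(x, {'train_loss': compute_batch_loss()})        # placeholder training step
    graph_accuracy.append(epoch, {'val_accuracy': evaluate_accuracy()})   # placeholder evaluation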
Points
graph_loss.append(x, {'train_loss': train_loss, 'test_loss': test_loss})
The graph.append function takes the following arguments:
Parameter | Type | Description
---|---|---
x | integer | Current iteration value.
y | dict | Key-value dictionary of metric values at this iteration.
Images
seq = session.image_sequence(x=0, name="Person recognizer")
seq.append(pil_image,
           metrics={'accuracy': 1},
           outputs={'person': 'Lena'},
           image_id=str(img_id) + "_img")
Image sequences are used to track visual information during training. This is especially useful for computer vision projects (e.g. image segmentation, object detection, …).
An ImageSequence object instance seq can be created by calling the session.image_sequence function, which takes the following arguments:
Parameter | Type | Description
---|---|---
x | integer | Current iteration value.
name (optional) | string | Descriptive name for the image sequence, useful if you log multiple different image types during training. Defaults to "".
You can log image predictions from the seq instance created above by calling seq.append(...), which takes the following input parameters:
Parameter | Type | Description
---|---|---
image_pil | PIL.Image | The image itself. See here for converting a numpy array to a PIL.Image, and here for converting an OpenCV image to a PIL.Image.
image_id (optional) | string | Used to identify images, making it easier to compare predictions for the same image throughout training. Defaults to "".
outputs (optional) | dict | String-to-string map used to track string predictions and outputs for the image (e.g. the predicted class in image classification).
metrics (optional) | dict | String-to-number map used to track numeric metrics that describe the prediction.
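If your pipeline produces numpy arrays (for example from OpenCV) rather than PIL images, a minimal sketch of the conversion before calling seq.append could look like this (the random array, metric, and output values below are placeholders):
import numpy as np
from PIL import Image

# placeholder data: a random RGB image as a uint8 numpy array of shape (height, width, 3)
array = (np.random.rand(128, 128, 3) * 255).astype(np.uint8)
# note: OpenCV images are BGR; reverse the channel order first, e.g. array = array[:, :, ::-1]
pil_image = Image.fromarray(array)
seq.append(pil_image, metrics={'accuracy': 0.9}, outputs={'person': 'Lena'}, image_id='frame_0')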
Note for Keras users: you may access the Session object via keras_callback_instance.session. This instance member is initialized when the first points are logged to Losswise.
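For example, a sketch (the sequence name, iteration value, and pil_image are placeholders) of using that member to log an image sequence once training has started:
# keras_callback_instance.session becomes available after the first points are logged
seq = keras_callback_instance.session.image_sequence(x=0, name="Validation predictions")
seq.append(pil_image, outputs={'person': 'Lena'}, image_id='val_0')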
Images full example
import time
import random
import losswise
import numpy as np
from PIL import Image
# TODO: change this piece of code
losswise.set_api_key('YOUR API KEY')
max_iter = 20
session = losswise.Session(max_iter=max_iter,
                           params={'max_iter': max_iter, 'dropout': 0.3, 'lr': 0.01, 'rnn_sizes': [256, 512]})
graph = session.graph('loss', kind='min')
for x in range(max_iter):
    train_loss = 1. / (0.1 + x + 0.1 * random.random())
    test_loss = 1.5 / (0.1 + x + 0.2 * random.random())
    graph.append(x, {'train_loss': train_loss, 'test_loss': test_loss})
    time.sleep(0.5)
    if x % 5 == 0:
        seq = session.image_sequence(x=x, name="Test")
        for img_id in range(5):
            pil_image = Image.open("image.png")
            seq.append(pil_image,
                       metrics={'accuracy': 1},
                       outputs={'name': 'Lena'},
                       image_id=str(img_id) + "_img")
session.done()
To run the script on the right, first download the image by doing the following:
wget https://upload.wikimedia.org/wikipedia/en/thumb/2/24/Lenna.png/220px-Lenna.png -O image.png
Running the script will generate a corresponding view in the Losswise dashboard.
Keras plugin
# Full Keras + Losswise example
# Make sure to substitute your Losswise project's API key!
from keras.models import Sequential
from keras.layers import LSTM, Dense
import losswise
from losswise.libs import LosswiseKerasCallback
import numpy as np
losswise.set_api_key('your_api_key')
data_dim = 16
timesteps = 8
num_classes = 10
# expected input data shape: (batch_size, timesteps, data_dim)
model = Sequential()
model.add(LSTM(32, return_sequences=True,
               input_shape=(timesteps, data_dim)))
model.add(LSTM(32, return_sequences=True))
model.add(LSTM(32)) # return a single vector of dimension 32
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])
# Generate dummy training data
x_train = np.random.random((1000, timesteps, data_dim))
y_train = np.random.random((1000, num_classes))
# Generate dummy validation data
x_val = np.random.random((100, timesteps, data_dim))
y_val = np.random.random((100, num_classes))
model.fit(x_train, y_train,
          batch_size=100, epochs=10,
          callbacks=[LosswiseKerasCallback(tag='keras test', params={'lstm_size': 32}, track_git=True, display_interval=50)],
          validation_data=(x_val, y_val))
You can easily monitor Keras training sessions using Losswise’s Keras callback extension plugin, as seen to the right.
The requisite Keras callback is initialized by calling LosswiseKerasCallback, which takes the following input parameters:
Parameter | Type | Description
---|---|---
tag (optional) | string | String identifier for the experiment. By default, Losswise will try to use the git branch name as the tag.
params (optional) | dict | Used to associate hyperparameters with the experiment. Any JSON-serializable object will work here (including nesting).
track_git (optional) | bool | Should Losswise track the git diff and the current branch? Defaults to true.
display_interval (optional) | integer | Interval at which points are logged; pointwise values are averaged within each interval. This value is used for all graphs created automatically by the LosswiseKerasCallback object.
max_iter (optional) | integer | Number of iterations in the experiment, used to estimate experiment completion as well as for simple graph smoothing.
Keras users who want more control should simply create their own Keras callback objects. It’s super easy, especially if you use the LosswiseKerasCallback object as a starting point: https://github.com/Losswise/losswise-python/blob/master/losswise/libs.py.
Build Runner
Losswise offers a powerful integration with GitHub, allowing you to initiate training sessions from GitHub and run them on your on-premise GPU servers, AWS servers, or Google Cloud servers, with minimal configuration. This feature requires the latest version of the Losswise client library, so make sure to run pip install --upgrade losswise if you are running an old version of the client.
In comparison with other build tools (CircleCI, Travis, etc.) Losswise offers terrific support for running builds on your own machines. This makes Losswise an ideal choice for machine learning projects that want to run training sessions on their own hardware.
Using Losswise and Github together enables the following workflow:
- Develop a model locally and commit changes.
- Push the model to a new branch on GitHub (e.g. git push origin train/cnn-dilated-3).
- A GitHub webhook to Losswise kicks off training on your machine.
- Monitor the progress of the training on your Losswise dashboard, which also contains a direct link to your GitHub branch.
- Repeat the above steps as needed for exploring model / data preprocessing variations.
- Use Losswise’s smart sorting features to select the best model (according to your defined criteria).
- Click on the GitHub link in the row corresponding to this model. This will bring you directly to the GitHub view of your commit’s diff.
- Merge the branch or create a PR.
- Repeat the above.
Install Agent
You can install the agent on any Linux or macOS machine with our installation script:
wget "https://cdn.losswise.com/agent/install.sh" -qO - | \
sudo bash -s YOUR_POOL_TOKEN
Replace YOUR_POOL_TOKEN with the pool token found at https://losswise.com/setup/agents.
The installation script will find the correct binary for your operating system and architecture, install it at /usr/local/bin/losswise-agent, and add your agent pool token to /etc/losswise/agent.yaml.
Start Agent
$ losswise-agent --help
Usage:
losswise-agent [OPTIONS]
Application Options:
-t, --token= Losswise agent pool token.
-n, --name= Custom agent name.
-b, --base-url= Custom base url.
-e, --env= Environment variables.
-c, --config-file= Config file to use.
-p, --build-file= Build pipeline file to use.
-d, --debug Print debug logs.
Help Options:
-h, --help Show this help message
You can configure your agent using flags or the configuration file at /etc/losswise/agent.yaml.
All the flags are optional as long as your agent pool token is set in the agent configuration file, so you can simply run losswise-agent or configure it to start on boot with systemd, upstart, or init.d.
You can start as many agents as you like, on as many servers as you have. A common setup is to run one agent per GPU, starting each agent with a flag like --env GPU=0 so the build knows which GPU to target. The environment variable provided by this flag is added to every build that agent runs.
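For example, a training script run by an agent started with --env GPU=0 could pin itself to that GPU like this (a sketch; CUDA_VISIBLE_DEVICES is the standard CUDA mechanism, not a Losswise feature):
import os

# the agent's --env GPU=0 flag makes GPU=0 available in the build's environment
gpu = os.environ.get("GPU", "0")
os.environ["CUDA_VISIBLE_DEVICES"] = gpu  # restrict TensorFlow / PyTorch to the assigned GPU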
GitHub Setup
Go to your project’s webhook setup page for instructions on how to add a webhook to your GitHub repository.
Trigger builds
Simply push to a repo with a webhook and we’ll create a build and add it to your build queue. You’ll then see it listed on Losswise under Builds. If you have a free agent it will run the build.
By default, without setting any flags, your agent will run .losswise/build.sh after cloning and checking out the branch that triggered the build. Make sure the server has permission to clone the repo and that .losswise/build.sh exists.