

pip install losswise

Welcome to the Losswise API reference! By adding just a few lines of code to your ML / AI / optimization code, you get beautiful interactive visualizations, a tabular display of your models’ performance, and much more.

If you have any problems or questions please send us an email at

To get started, first install Losswise’s Python client as seen on the right (source code is available at

A minimal example

import losswise
import time
import random

losswise.set_api_key('your_api_key')  # substitute your project's API key
session = losswise.Session(tag='my_special_lstm', params={'rnn_size': 512}, max_iter=10)
graph = session.graph('loss', kind='min')
for x in range(10):
    train_loss = 1. / (0.1 + x + 0.1 * random.random())
    test_loss = 1.5 / (0.1 + x + 0.2 * random.random())
    graph.append(x, {'train_loss': train_loss, 'test_loss': test_loss})
    time.sleep(1.)
session.done()

Each project in Losswise is associated with an API key. Once you sign up for Losswise you will automatically be given an API key. You can start logging to Losswise by simply running the code on the right. Just remember to replace your_api_key with your desired API key from



Projects are the highest level organizational structure in Losswise. Projects are created right from your web browser in

Experiments within a single project should be directly comparable to each other. If they are unrelated, they belong in separate projects.

The main reason to use separate projects is that each project’s tabular dashboard automatically infers columns (e.g. min(loss), min(ppl), …) from the data logged to this project. If you log unrelated experiments to the same project, the sorting functionality for these columns becomes less meaningful, and you may end up with too many columns. The relevant project is specified in your code via API keys.


session = losswise.Session(tag='my_special_lstm', max_iter=10, params={'rnn_size': 512})

A session is simply an experiment that belongs to a project.

Although the max_iter argument is optional, it is recommended that you use it because it allows Losswise to estimate when your sessions will terminate.
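As a back-of-envelope illustration of what max_iter enables (this is not Losswise's actual implementation, and eta_seconds is a hypothetical helper), a completion estimate can be extrapolated from the iterations finished so far:

```python
# Illustrative sketch only: with max_iter known, time remaining can be
# extrapolated from iterations completed, assuming a roughly constant
# per-iteration cost.
def eta_seconds(elapsed_seconds, iters_done, max_iter):
    """Estimated seconds until the session finishes."""
    if iters_done == 0:
        return float('inf')
    per_iter = elapsed_seconds / iters_done
    return per_iter * (max_iter - iters_done)

eta_seconds(30.0, 3, 10)  # 3 iterations took 30s -> 70.0s remain
```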

The params argument is also optional and allows you to associate parameters with experiments. Any JSON serializable object will work here. You can provide a Python dictionary where some of the values are lists, for example. The Losswise UI allows you to inspect deeply nested parameter objects.
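For instance, a nested parameter object like the hypothetical one below works, since the only requirement is JSON serializability:

```python
import json

# A hypothetical nested parameter object: lists and nested dictionaries
# are fine, since the only requirement is JSON serializability.
params = {
    'rnn_size': 512,
    'optimizer': {'name': 'adam', 'lr': 0.001, 'betas': [0.9, 0.999]},
    'layer_sizes': [128, 128, 64],
}
json.dumps(params)  # raises TypeError if params is not serializable

# session = losswise.Session(tag='my_special_lstm', params=params, max_iter=10)
```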

When a session object is created, Losswise will send heartbeat messages to make sure your code is still running. If multiple heartbeat messages are missed and session.done() was not called, Losswise will assume your program has crashed and will set this session’s status in the dashboard as “Cancelled”. If this was caused by a network outage, your code will continue running as normal. Losswise was designed to be non-intrusive and robust - the last thing we’d ever want to do is crash your program or slow it down.
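The heartbeat mechanism described above can be sketched as follows. This is purely an illustration of the general idea, not Losswise's implementation: a background thread pings periodically, and if pings stop arriving without done() having been called, the backend can infer a crash.

```python
import threading
import time

# Illustrative sketch of the heartbeat mechanism: a daemon thread
# records a "ping" at a fixed interval until the session is marked done.
beats = []

def heartbeat(stop, interval=0.01):
    while not stop.is_set():
        beats.append(time.time())  # real code would POST to the server
        stop.wait(interval)

stop = threading.Event()
t = threading.Thread(target=heartbeat, args=(stop,), daemon=True)
t.start()
time.sleep(0.05)  # training happens here
stop.set()        # analogous to session.done() stopping the heartbeat
t.join()
```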


graph_loss = session.graph('loss', kind='min')
graph_accuracy = session.graph('accuracy', kind='max')

You may create any number of graphs from a session object by calling the session.graph method.

The name of the quantity in question is the first argument to session.graph.

The kind argument is optional and can take the value min or max. It tells Losswise whether the goal of the experiment is to minimize or maximize the quantities graphed here.

For the example on the right, Losswise will create a column for min(loss) and another one for max(accuracy), which can then be used to compare models.


graph_loss.append(x, {'train_loss': train_loss, 'test_loss': test_loss})

Points are logged by calling the .append(x, y) method on the graph object of interest. The first argument x is the iteration number of the experiment. The second argument y is a key-value dictionary mapping each quantity's name to its value.
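Assuming the y dictionary's keys need not be identical on every call, you could log test_loss at a sparser cadence than train_loss. In this sketch, a stub `append` function records points in place of the real graph.append:

```python
import random

# Sketch: log train_loss every iteration but test_loss only every 10th.
# `logged` and `append` are stand-ins for the real graph object.
logged = []

def append(x, y):
    logged.append((x, y))  # real code: graph.append(x, y)

for x in range(100):
    y = {'train_loss': 1.0 / (1.0 + x + random.random())}
    if x % 10 == 0:
        y['test_loss'] = 1.5 / (1.0 + x + random.random())
    append(x, y)
```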

Keras plugin

# Full Keras + Losswise example
# Make sure to substitute your Losswise project's API key!
from keras.models import Sequential
from keras.layers import LSTM, Dense
import losswise
from losswise.libs import LosswiseKerasCallback
import numpy as np

losswise.set_api_key('your_api_key')  # substitute your project's API key
data_dim = 16
timesteps = 8
num_classes = 10
# expected input data shape: (batch_size, timesteps, data_dim)
model = Sequential()
model.add(LSTM(32, return_sequences=True,
               input_shape=(timesteps, data_dim)))
model.add(LSTM(32, return_sequences=True))
model.add(LSTM(32))  # return a single vector of dimension 32
model.add(Dense(10, activation='softmax'))
# Generate dummy training data
x_train = np.random.random((1000, timesteps, data_dim))
y_train = np.random.random((1000, num_classes))
# Generate dummy validation data
x_val = np.random.random((100, timesteps, data_dim))
y_val = np.random.random((100, num_classes))
model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])
model.fit(x_train, y_train,
          batch_size=100, epochs=10,
          callbacks=[LosswiseKerasCallback(tag='keras test')],
          validation_data=(x_val, y_val))

You can easily monitor Keras training sessions using Losswise’s Keras callback extension plugin, as seen to the right.

The requisite Keras callback is created by constructing a LosswiseKerasCallback and passing it to model.fit via the callbacks argument.

Buildkite integration

# If you're using Buildkite and leave the tag parameter unset,
# your git branch will be used as the session tag.
session = losswise.Session(max_iter=10)

Losswise offers a powerful integration with Buildkite, a build pipeline tool that can initiate training sessions from GitHub / Bitbucket commits and run them on your on-premise GPU servers, AWS servers, or Google Cloud servers with minimal configuration. This feature requires version 0.91 of the Losswise client library, so make sure to run pip install --upgrade losswise if you are running an older version of the client. You do not need to do anything extra to use the integration: Losswise simply checks for Buildkite environment variables and uses them if they exist.
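The fallback behavior can be sketched like this. BUILDKITE_BRANCH is a real Buildkite environment variable, but whether Losswise reads exactly this variable is an assumption here, and session_tag is a hypothetical helper, not part of the client library:

```python
import os

# Sketch: use the explicit tag if given, otherwise fall back to the
# branch name Buildkite exposes in the job environment.
def session_tag(explicit_tag=None, env=None):
    env = os.environ if env is None else env
    return explicit_tag or env.get('BUILDKITE_BRANCH')

session_tag('my_tag', env={})                        # -> 'my_tag'
session_tag(None, env={'BUILDKITE_BRANCH': 'main'})  # -> 'main'
```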

In comparison with other build tools (CircleCI, Travis, etc.) Buildkite offers terrific support for running builds on your own machines. This makes Buildkite an ideal choice for machine learning projects that want to run training sessions on their own hardware.

Using Losswise and Buildkite together enables the following workflow: