Writing a Skill for Misty

Misty's a pretty capable robot on her own, but the exciting part of working with Misty is seeing her run the skills you create for her. Check out the Misty Community GitHub repo for example skills (including a fun Python-based skill that gives Misty a voice using the Google Text-to-Speech API).

Creating your own skill for Misty typically involves two things: getting data from Misty via WebSocket connections and sending commands to Misty using her API. This topic walks you through both sides of this process.

Sending Commands to Misty

To send API commands to Misty in a skill, you can use the JavaScript API or this Python wrapper. To experiment with the API, you can also use a REST client such as Postman and send GET and POST commands to Misty directly.

Misty's API includes commands for:

  • Display and light control
  • Audio control
  • Face detection, training, and recognition
  • Locomotion
  • Mapping
  • Head movement
  • Configuration and information
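For a quick taste of the API outside the helper tools, you can call an endpoint directly. The sketch below assumes Node 18+ (for the global fetch API) and uses the info/device endpoint shown later in this topic; the IP address is a placeholder for your robot's.

```javascript
// Build the full URL for a Misty REST endpoint.
function commandUrl(ip, endpoint) {
    return "http://" + ip + "/api/" + endpoint;
}

// Sketch: fetch Misty's device information. Requires Node 18+ (global
// fetch) and a reachable robot, so the call itself is commented out.
async function getDeviceInfo(ip) {
    const response = await fetch(commandUrl(ip, "info/device"));
    return response.json();
}

// getDeviceInfo("10.0.1.160").then(function (data) { console.log(data); });
```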

The Misty I GitHub repo contains a variety of sample skills that you can test and adapt for your own use.

We supply two helper tools that make it easy to develop JavaScript skills for Misty:

  • lightClient.js - The LightClient tool simplifies JavaScript access to the REST endpoints for sending commands to the robot
  • lightSocket.js - The LightSocket tool streamlines opening, connecting, and subscribing to a WebSocket to receive data back from the robot

Get both tools at the Misty I GitHub repo.

Using the LightClient JS Helper

Both the lightClient.js and lightSocket.js files should typically be located in a "tools" or "assets" folder. It’s important to reference the files prior to your application file in your .html page. For example:

<script src="tools/lightClient.js"></script>
<script src="tools/lightSocket.js"></script>
<script src="app.js"></script>

The first step to creating an external skill is to create an instance of the LightClient class, passing in your robot's IP address and the amount of time in ms you want your program to wait before timing out if no response is detected (the default is 30 seconds).

let client = new LightClient("[robot IP address]", 10000);

Once you create an instance of LightClient, it's simple to send requests to Misty’s REST endpoints. The base URL for Misty’s REST commands is built into LightClient:

http://{ipAddress}/api/

In order to use a specific endpoint, just pass in the rest of the URL. For example, you can do the following to send a GET request to the GetDeviceInformation() command:

client.GetCommand("info/device", function(data) {
    console.log(data);
});

Here’s another example of using LightClient to send a GET request to Misty, this time to obtain a list of the images currently stored on the robot:

client.GetCommand("images", function(data) {
    console.log(data);
});

You will also want to send POST requests to the robot. To send data along with a POST request, pass it to lightClient.PostCommand as the second argument. Be sure to use the JSON.stringify() method first to convert the JavaScript value(s) to a JSON string.

For example, we can send a POST request to the ChangeLED() endpoint to change the color of Misty's chest logo LED to blue. If there are no errors, the callback returns true, and we log a success message.

Specify the RGB values and convert the data to a JSON string:

let data = {
    "red": 0,
    "green": 0,
    "blue": 255
};
let payload = JSON.stringify(data);

Send the request, including the data:

client.PostCommand("led/change", payload, function(result) {
    if(result) {
        console.log("Request Successful")
    }
});

Using the LightSocket JS Helper

Both the lightClient.js and lightSocket.js files should typically be located in a "tools" or "assets" folder. It’s important to reference the files prior to your application file in your .html page. For example:

<script src="tools/lightClient.js"></script>
<script src="tools/lightSocket.js"></script>
<script src="app.js"></script>

As we did for LightClient, the first step in using lightSocket.js is to create an instance of the LightSocket class, passing in your robot's IP address. Then, you call the Connect() method to open a WebSocket connection.

let socket = new LightSocket(ip);
socket.Connect();

In order to subscribe to a WebSocket using LightSocket, simply call the Subscribe() method. The arguments passed to the function correspond to the properties of subscribeMsg described in "Getting Data from Misty." See the function below for reference:

socket.Subscribe = function (eventName, msgType, debounceMs, property, inequality, value, returnProperty, eventCallback)

Here we create a TimeOfFlight WebSocket subscription for the center time-of-flight sensor, and log the data as we receive it:

socket.Subscribe("CenterTimeOfFlight", "TimeOfFlight", 100, "SensorPosition", "=", "Center", null, function(data) {
    console.log(data);
});

It's always best practice to unsubscribe from a WebSocket event after use, so at the end of your script, be sure to call the Unsubscribe() method:

socket.Unsubscribe("CenterTimeOfFlight");
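To connect the dots between Subscribe()’s arguments and the raw subscription message described in "Getting Data from Misty," here is a hypothetical sketch of how such a mapping could work; the actual lightSocket.js implementation may differ.

```javascript
// Hypothetical sketch: map LightSocket.Subscribe-style arguments onto
// Misty's raw subscription message format. The real helper may differ.
function buildSubscribeMsg(eventName, msgType, debounceMs, property, inequality, value, returnProperty) {
    var msg = {
        "Operation": "subscribe",
        "Type": msgType,
        "DebounceMs": debounceMs,
        "EventName": eventName,
        "Message": "",
        "ReturnProperty": returnProperty
    };
    // Event conditions are only included when a property filter is given.
    if (property) {
        msg.EventConditions = {
            "Property": property,
            "Inequality": inequality,
            "Value": value
        };
    }
    return JSON.stringify(msg);
}

var centerTofMsg = buildSubscribeMsg("CenterTimeOfFlight", "TimeOfFlight", 100, "SensorPosition", "=", "Center", null);
```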

Getting Data from Misty

A WebSocket connection provides a live, continuously updating stream of data from Misty. When you subscribe to a WebSocket, you can get data for your robot ranging from distance information to face detection events to movement and more.

You can directly observe WebSocket data in your browser's JavaScript console by connecting your robot to the API Explorer, but to use WebSocket data in a skill, you'll need to subscribe to it programmatically in your code. We'll walk through this process using the tofApp.js sample. You can download this JavaScript sample here.

To subscribe to a WebSocket data stream, you must first open the WebSocket, then send a message to specify the exact data you want to receive. For some WebSocket data, you must also send a REST command to the robot so it starts generating the data. For the time-of-flight sensor data that the tofApp.js sample uses, sending a REST command is not required, because Misty's time-of-flight sensors are always on.

IMPORTANT! For the most current version of the tofApp.js sample code, always check our GitHub repo.

Subscribing & Unsubscribing to a WebSocket

The first thing the tofApp.js sample does is to construct the message that subscribes to the exact WebSocket data we want.

The Type property is the name of the desired data stream. Misty's available WebSocket data stream types are described below. Currently, they include:

  • TimeOfFlight
  • ComputerVision
  • FaceDetection (deprecated)
  • FaceRecognition (deprecated)
  • LocomotionCommand
  • HaltCommand
  • SelfState
  • WorldState

The optional DebounceMs value specifies how frequently the data is sent. If you don't specify a value, by default the data is sent every 250ms. In this case, we've set it to be sent every 100ms.

The EventName property is a name you specify for how your code will refer to this particular WebSocket instance. Message and ReturnProperty are also optional values.

For time-of-flight subscriptions, you must also include EventConditions. These specify which time-of-flight sensor to access by setting Value to Right, Center, Left, or Back. This sample code subscribes to the center time-of-flight sensor only.

After creating the subscribe message, the sample also creates an unsubscribe message. Unsubscribing from a WebSocket data stream when it's no longer needed is good practice and helps you avoid performance issues. The unsubscribe message is sent when the skill is done using the data.

//Use this variable to hold the IP address for the robot.
var ip = "00.0.0.000";

//Create a message to subscribe to the desired WebSocket data.
var subscribeMsg = {
  "Operation": "subscribe",
  "Type": "TimeOfFlight",
  "DebounceMs": 100,
  "EventName": "CenterTimeOfFlight",
  "Message": "",
  "ReturnProperty": null,
  "EventConditions":
  {
    "Property": "SensorPosition",
    "Inequality": "=",
    "Value": "Center"
  }
};

//Create a message to unsubscribe from the data when done.
var unsubscribeMsg = {
  "Operation": "unsubscribe",
  "EventName": "CenterTimeOfFlight",
  "Message": ""
};

//Format the messages as JSON strings.
var subMsg = JSON.stringify(subscribeMsg);
var unsubMsg = JSON.stringify(unsubscribeMsg);

Once constructed, the messages are serialized as JSON strings so they are ready to send as soon as the WebSocket is open.

Opening & Closing a WebSocket

Having constructed the subscribe and unsubscribe messages, the tofApp.js sample next attempts to open a WebSocket connection. Once the WebSocket is open, it sends the JSON-formatted “subscribe” message.

Once you've successfully subscribed to a data stream, you can use the socket.onmessage() function to handle the data received back from the robot. In this example, we simply log the received data to the console. For a real skill, you could instead parse the event data and write a conditional function based on a particular property value to do something when a condition is met.

In the sample, after a specified number of messages are received, we unsubscribe from the data stream and close the WebSocket connection. Alternately, because a given WebSocket can carry multiple data subscriptions, you could keep it open after unsubscribing and close it only when you are entirely done.

//Set the initial WebSocket message count to 0, as we're
//only keeping this WebSocket open for 10 messages total.
var messageCount = 0;

var socket;

function startTimeOfFlight() {
    //Create a new WebSocket connection to the robot.
    socket = new WebSocket("ws://" + ip + "/pubsub");

    //When the WebSocket's open, send the subscribe message.
    socket.onopen = function(event) {
      console.log("WebSocket opened.");
      socket.send(subMsg);
    };

    //Handle the WebSocket data from the server.
    //Send the unsubscribe message when we're done,
    //then close the socket.
    socket.onmessage = function(event) {
      var message = JSON.parse(event.data).message;
      messageCount += 1;
      console.log(message);
      if (messageCount == 10) {
        socket.send(unsubMsg);
        socket.close();
      }
    };

    //Handle any errors that occur.
    socket.onerror = function(error) {
      console.log("WebSocket Error: " + error);
    };

    //Do something when the WebSocket is closed.
    socket.onclose = function(event) {
      console.log("WebSocket closed.");
    };
};
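In a real skill, the onmessage handler would usually branch on the message contents rather than only logging them. Here is a small sketch; the DistanceInMeters field name follows the TimeOfFlight sample data in the next section, and the 0.2-meter threshold is arbitrary.

```javascript
// Pure helper: does a TimeOfFlight message report an obstacle closer
// than the given threshold? Field name follows the sample data.
function isObstacleClose(message, thresholdMeters) {
    return typeof message.DistanceInMeters === "number" &&
        message.DistanceInMeters < thresholdMeters;
}

// Inside socket.onmessage you might use it like this:
//   var message = JSON.parse(event.data).message;
//   if (isObstacleClose(message, 0.2)) {
//       // e.g., send a REST command to stop driving
//   }
```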

WebSocket Types & Sample Data

The following are Misty's available WebSocket data stream types. For any of these streams, you can filter the subscription so that it (a) returns only a specified subset of the data and (b) sends messages only when specified conditions on the current values are met.

Note: All of Misty's WebSocket data structures are subject to change.
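For example, a subscription message applying both kinds of filtering might look like the following sketch. It follows the subscription format from the tofApp.js sample; using ReturnProperty to narrow the payload to a single field is an assumption to verify against the current docs.

```javascript
// Sketch: subscribe to right-side time-of-flight events only, returning
// just the DistanceInMeters field via ReturnProperty (assumed behavior).
var filteredSubscribeMsg = {
    "Operation": "subscribe",
    "Type": "TimeOfFlight",
    "DebounceMs": 100,
    "EventName": "RightDistanceOnly",
    "Message": "",
    "ReturnProperty": "DistanceInMeters",
    "EventConditions": {
        "Property": "SensorPosition",
        "Inequality": "=",
        "Value": "Right"
    }
};
var filteredSubMsg = JSON.stringify(filteredSubscribeMsg);
```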

TimeOfFlight

Misty has four time-of-flight sensors that provide raw proximity data (in meters) in a single stream. The TimeOfFlight WebSocket sends this data any time a time-of-flight sensor is triggered. It is possible for proximity data to be sent as frequently as every 70 milliseconds, though it can be significantly slower. It is not sent at timed intervals.

Sample time-of-flight sensor data:

TimeOfFlight{
    "EventName":"TimeOfFlight",
    "Message":{
        "Created":"2018-03-30T20:36:46.5816862Z",
        "DistanceInMeters":0.184,
        "Expiry":"2018-03-30T20:36:46.9316897Z",
        "SensorId":"CD727A0A",
        "SensorPosition":"Right"
    },
    "Type":"TimeOfFlight"
}

ComputerVision (Beta)

You can subscribe to the ComputerVision WebSocket to obtain data on both face detection and face recognition events.

The EventName value is the name you provide when you register the WebSocket connection.

If face recognition is running on the robot, and a previously trained face is recognized, the PersonName value is the name previously assigned to that face. The PersonName value is unknown_person if an untrained/unknown face is detected. The PersonName value is null if face recognition is not currently running.

TrackId is reserved data that may change in the future.

Sample ComputerVision data for a face recognition event:

ComputerVision{
    "EventName":"MyFaceRecognition",
    "Message":{
        "Bearing":-3,
        "Created":"2018-07-02T16:26:20.1718422Z",
        "Distance":71,
        "Elevation":3,
        "Expiry":"2018-07-02T16:26:20.9218446Z",
        "PersonName":"Barkley",
        "SensorId":null,
        "SensorName":null,
        "TrackId":0
    },
    "Type":"ComputerVision"
}

FaceDetection (Deprecated)

Note: The FaceDetection WebSocket is deprecated. Use the ComputerVision WebSocket instead.

The FaceDetection WebSocket returns only raw face detection data. Currently, this sensory data is not aggregated with other face data, so there may be empty and null fields.

The FaceDetection WebSocket data is sent only upon a sensory message trigger. It is not sent at timed intervals. The approximate transmission rate of FaceDetection data is 4x/second, but this timing can vary.

Sample face detection data:

FaceDetection{
    "EventName":"FaceDetection",
    "Message":{
        "Bearing":-3,
        "Created":"2018-04-02T16:25:00.6934206Z",
        "Distance":71,
        "Elevation":3,
        "Expiry":"2018-04-02T16:25:01.4434254Z",
        "FaceId":3,
        "PersonName":null,
        "SensorId":null
    },
    "Type":"FaceDetection"
}

FaceRecognition (Deprecated)

Note: The FaceRecognition WebSocket is deprecated. Use the ComputerVision WebSocket instead.

The FaceRecognition WebSocket returns only raw face recognition data. Currently, this sensory data is not aggregated with other face data, so there may be empty and null fields, including the recognized name.

The FaceRecognition WebSocket data is sent only upon a sensory message trigger. It is not sent at timed intervals. The approximate transmission rate of FaceRecognition data is 1x/second, but this timing can vary.

Sample face recognition data:

FaceRecognition{
    "EventName":"FaceRecognition",
    "Message":{
        "Bearing":0,
        "Created":"2018-04-02T16:26:20.1718422Z",
        "Distance":0,
        "Elevation":0,
        "Expiry":"2018-04-02T16:26:20.9218446Z",
        "FaceId":12,
        "PersonName":"Barkley",
        "SensorId":null
    },
    "Type":"FaceRecognition"
}

LocomotionCommand

LocomotionCommand WebSocket data is sent every time the linear or angular velocity of the robot changes. It is not sent at timed intervals.

Sample locomotion data:

LocomotionCommand{
    "EventName":"LocomotionCommand",
    "Message":{
        "ActionId":0,
        "AngularVelocity":0,
        "Created":"2018-04-02T22:59:39.3350238Z",
        "LinearVelocity":0.30000000000000004,
        "UsePid":true,
        "UseTrapezoidalDrive":true,
        "ValueIndex":0
    },
    "Type":"LocomotionCommand"
}

HaltCommand

HaltCommand WebSocket data is sent every time the robot stops and contains the date and time of the event. It is not sent at timed intervals.

SelfState

The SelfState WebSocket provides a variety of data about Misty’s current internal state, including:

  • battery charge, voltage, and charging status
  • IP address
  • affect
  • position and orientation ("pose")
  • SLAM status
  • sensor messages

Note: There are a number of fields in the WebSocket data structure that are reserved for future use and which may be empty or contain null data:

  • Acceleration
  • BumpedState
  • CurrentGoal
  • MentalState
  • Personality
  • PhysiologicalBehavior

SelfState WebSocket messages are sent even if the data has not changed, as the data is sent via timed updates, instead of being triggered by events. The SelfState WebSocket can send data as frequently as every 100ms, though it is set by default to 250ms. To avoid having to handle excess data, you can change the message frequency for the WebSocket with the DebounceMs field, as shown in the lightSocket.js JavaScript helper.
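For example, to slow SelfState updates to once per second, a subscription message might look like this sketch. It follows the format used in the tofApp.js sample; omitting EventConditions for SelfState is an assumption, since the docs only require them for time-of-flight subscriptions.

```javascript
// Sketch: subscribe to SelfState with a 1000 ms debounce to cut down
// on message volume.
var selfStateSubscribeMsg = {
    "Operation": "subscribe",
    "Type": "SelfState",
    "DebounceMs": 1000,
    "EventName": "SlowSelfState",
    "Message": "",
    "ReturnProperty": null
};
var selfStateSubMsg = JSON.stringify(selfStateSubscribeMsg);
```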

Sample SelfState data:

SelfState {
    "eventName": "SelfState",
    "message": {
        "acceleration": null,
        "batteryChargePercent": 0,
        "batteryVoltage": 0,
        "bumpedState": {
            "disengagedSensorIds": [],
            "disengagedSensorNames": [],
            "disengagedSensors": [],
            "engagedSensorIds": [],
            "engagedSensorNames": [],
            "engagedSensors": []
        },
        "currentGoal": {
            "animation": null,
            "animationId": null,
            "directedMotion": null,
            "directedMotionBehavior": "SupersedeAll",
            "haltActionSequence": false,
            "haltAnimation": false
        },
        "isCharging": false,
        "localIPAddress": "10.0.1.160",
        "location": {
            "bearing": 2.1161846957231862,
            "bearingThreshold": {
                "lowerBound": 0,
                "upperBound": 0
            },
            "distance": 0.049783250606253104,
            "distanceThreshold": {
                "lowerBound": 0,
                "upperBound": 0
            },
            "elevation": -0.009038750542528028,
            "elevationThreshold": {
                "lowerBound": 0,
                "upperBound": 0
            },
            "pose": {
                "bearing": 2.1161846957231862,
                "created": "2018-09-17T21:01:35.7312016Z",
                "distance": 0.049783250606253104,
                "elevation": -0.009038750542528028,
                "frameId": "WorldOrigin",
                "framesProvider": {
                    "rootFrame": {
                        "created": "2018-09-17T18:21:22.8435331Z",
                        "id": "RobotBaseCenter",
                        "isStatic": true,
                        "linkFromParent": {
                            "isStatic": true,
                            "parentFrameId": "",
                            "transformFromParent": {
                                "bearing": 0,
                                "distance": 0,
                                "elevation": 0,
                                "pitch": 0,
                                "quaternion": {
                                    "isIdentity": true,
                                    "w": 1,
                                    "x": 0,
                                    "y": 0,
                                    "z": 0
                                },
                                "roll": 0,
                                "x": 0,
                                "y": 0,
                                "yaw": 0,
                                "z": 0
                            },
                            "transformToParent": {
                                "bearing": 3.141592653589793,
                                "distance": 0,
                                "elevation": 0,
                                "pitch": 0,
                                "quaternion": {
                                    "isIdentity": true,
                                    "w": 1,
                                    "x": 0,
                                    "y": 0,
                                    "z": 0
                                },
                                "roll": 0,
                                "x": 0,
                                "y": 0,
                                "yaw": 0,
                                "z": 0
                            }
                        }
                    }
                },
                "homogeneousCoordinates": {
                    "bearing": 2.1161846957231862,
                    "distance": 0.049783250606253104,
                    "elevation": -0.009038750542528028,
                    "pitch": -0.18708743155002594,
                    "quaternion": {
                        "isIdentity": false,
                        "w": 0.99558717,
                        "x": -0.008987884,
                        "y": -0.09339719,
                        "z": -0.0015491969
                    },
                    "roll": -0.017920719552386073,
                    "x": -0.025824014097452164,
                    "y": 0.04255925118923187,
                    "yaw": -0.001430802591800146,
                    "z": 0.00044997225631959736
                },
                "pitch": -0.18708743155002594,
                "roll": -0.017920719552386073,
                "x": -0.025824014097452164,
                "y": 0.04255925118923187,
                "yaw": -0.001430802591800146,
                "z": 0.00044997225631959736
            },
            "unitOfMeasure": "None"
        },
        "mentalState": {
            "affect": {
                "arousal": 0,
                "dominance": 0,
                "valence": 0
            },
            "created": "2018-09-17T21:01:35.7312016Z",
            "personality": {
                "agreeableness": 0,
                "conscientiousness": 0,
                "extraversion": 0,
                "neuroticism": 0,
                "openness": 0
            },
            "physiologicalBehavior": {
                "hunger": {
                    "isEating": false,
                    "level": 0
                },
                "sleepiness": {
                    "isSleeping": false,
                    "level": 0
                }
            }
        },
        "occupancyGridCell": {
            "x": 0,
            "y": 0
        },
        "occupancyGridCellMeters": 0,
        "orientation": {
            "pitch": -0.18708743155002594,
            "roll": -0.017920719552386073,
            "yaw": -0.001430802591800146
        },
        "position": {
            "x": -0.025824014097452164,
            "y": 0.04255925118923187,
            "z": -0.025824014097452164
        },
        "slamStatus": {
            "runMode": "Exploring",
            "sensorStatus": "Ready",
            "status": 132
        },
        "stringMessages": null,
        "touchedState": {
            "disengagedSensors": [],
            "engagedSensors": []
        }
    },
    "type": "SelfState"
}

WorldState

The WorldState WebSocket sends data about the environment Misty is perceiving, including:

  • the locations of perceived objects
  • the times they were perceived

WorldState WebSocket messages are sent even if the data has not changed, as the data is sent via timed updates, instead of being triggered by events. The WorldState WebSocket can send data as frequently as every 100ms, though it is set by default to 250ms. To avoid having to handle excess data, you can change the message frequency for the WebSocket with the DebounceMs field, as shown in the lightSocket.js JavaScript helper.

Tutorial 1: Changing Misty’s LED

These tutorials describe how to write skills for Misty that use her REST API. With the REST API, we can send commands to Misty from an external device, like the web browser of a laptop or desktop. These tutorials show how to use .html documents and inline JavaScript to write programs for Misty that run in your web browser. In this tutorial, you learn how to write a program that sends a REST command to change the color of Misty’s chest LED.

Connecting Misty to Your Network

Because these commands are sent to Misty over a local network connection, you must connect your robot to your local network. Use the Companion App to connect your robot to your Wi-Fi network, or follow this guide to connect Misty to your Wi-Fi network using the API Explorer and an Ethernet/USB dongle. Once Misty is connected to your network, take note of her IP address to use with the REST API commands.

Setting Up Your Project

This tutorial uses Misty’s REST API to send a POST request that changes the color of her chest LED and logs a successful response. To set up your project, create a new .html document. To simplify the task of making XMLHttpRequest calls to Misty from the browser, we use Axios, an HTTP library supported by most web browsers and Node.js. To use Axios in your program, reference a link to a content delivery network (CDN) for Axios inside <script> tags in the <head> section of your .html file.

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <title>Remote Command Tutorial 1</title>
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <!-- Reference a link to a CDN for Axios here -->
    <script src="https://unpkg.com/axios/dist/axios.min.js"></script>
</head>
<body>
</body>
</html>

Alternately, you can download a compressed version of the Axios library to include in your project. Read more about Axios here.

Writing the Code

Within <script> tags in the <body> of your .html document, declare a constant variable ip and set its value to a string with your robot’s IP address. We’ll reference this variable throughout the program to send commands to Misty.

// Declare a constant variable and set its value to a string with your robot's IP address.
const ip = "<robotipaddress>";

When we send a command to change Misty’s LED color, we need to communicate what the new color should be. The REST API command to change Misty’s LED requires three parameters: "red", "green", and "blue". These parameters represent the RGB values of the new color.

Create an object called data to send with the POST request. Create a property for each color parameter, and set the value of each property to an integer between 0 and 255. The RGB values in the example change Misty’s chest LED to hot pink.

// Assemble the data to send with your POST request. Set values for each RGB color property.
let data = {
    "red": 255,
    "green": 0,
    "blue": 255
};

Now we’re ready to write the code to send the command to Misty. We do this by using the axios.post() method included in the Axios library. This method accepts two parameters:

  • the URL of the request, and
  • the data to send with the request.

The REST API endpoint for the ChangeLED command is http://<robotipaddress>/api/led/change. In your code, call axios.post() and pass a string with this endpoint as the first parameter. Use the previously defined variable ip to populate the <robotipaddress> section of the URL. Pass the data object for the second parameter.

// Call axios.post(), passing the URL of the ChangeLED endpoint as the first parameter, and the payload (the data object) as the second.
axios.post("http://" + ip + "/api/led/change", data)

Because Axios is promise-based, we need to use a then() method after calling axios.post(). This method registers a callback function that triggers when the promise is fulfilled. We pass a callback function to then() to interpret the return values of the POST call and print a message to the console about whether the request was a failure or a success.

axios.post("http://" + ip + "/api/led/change", data)
// Use then() after calling axios.post(). Pass a callback function to interpret the return values of the POST call and print a message to the console about whether the request was a failure or a success.
    .then(function (response) {
        console.log(`ChangeLED was a ${response.data[0].status}`);
    })

We use a catch() method after then(), which triggers if the promise is rejected. Pass a callback function to catch() to print to the console any errors returned by the request.

axios.post("http://" + ip + "/api/led/change", data)
    .then(function (response) {
        console.log(`ChangeLED was a ${response.data[0].status}`);
    })
    // Use a catch() method after then(). catch() triggers if the promise is rejected. Pass a callback to catch() to print any errors returned by the request to the console.
    .catch(function (error) {
        console.log(`There was an error with the request ${error}`);
    });

Now we’re ready to run the program!

  1. Save your .html document.
  2. Open the .html file in a web browser.
  3. Open the developer tools of your web browser to view the console.

When the page loads, it sends a ChangeLED command to Misty, and a message about the results of the command prints to the console. Congratulations! You have just written your first program using Misty’s REST API!

Full Sample

See the full .html document for reference.

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <title>Remote Command Tutorial 1</title>
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <!-- Reference a link to a CDN for Axios here -->
    <script src="https://unpkg.com/axios/dist/axios.min.js"></script>
</head>
<body>
    <script>
        // Declare a constant variable and set its value to a string with your robot's IP address.
        const ip = "<robotipaddress>";

        // Assemble the data to send with your POST request. Set values for each RGB color property.
        let data = {
            "red": 255,
            "green": 0,
            "blue": 255
        };

        // Call axios.post(), passing the URL of the ChangeLED endpoint as the first parameter, and the payload (the data object) as the second.
        axios.post("http://" + ip + "/api/led/change", data)
            // Use then() after calling axios.post(). Pass a callback function to interpret the return values of the POST call and print a message to the console about whether the request was a failure or a success.
            .then(function (response) {
                // log the result
                console.log(`ChangeLED was a ${response.data[0].status}`);
            })
            // Use a catch() method after then(). catch() triggers if the promise is rejected. Pass a callback to catch() to print any errors returned by the request to the console.
            .catch(function (error) {
                // log the error
                console.log(`There was an error with the request ${error}`);
            });
    </script>
</body>
</html>

Tutorial 2: Using Sensors, WebSockets, and Locomotion

In this tutorial, we write a skill that commands Misty to drive in a straight line for a designated period of time and stop if she encounters an object in her path. We do this by combining Misty’s DriveTime locomotion command with information received from the TimeOfFlight and LocomotionCommand WebSocket connections. In this tutorial, you’ll learn:

  • How to subscribe to data from Misty’s WebSocket connections
  • How to use the lightSocket.js helper tool
  • How to write callbacks that use data from WebSocket connections to allow Misty to make decisions about what to do in different situations

Before you write any code, connect Misty to your home network and make sure you know her IP address. You can see how to get this information in the first tutorial above.

Setting Up Your Project

In addition to Axios, this project uses the lightSocket.js helper tool to simplify the process of subscribing to Misty’s WebSocket streams. You can download this tool from our GitHub repository. Save the lightSocket.js file to a “tools” or “assets” folder in your project.

To set up your project, create a new .html document. Give it a title, and include references to lightSocket.js and a content delivery network (CDN) for the Axios library in the <head> section. We write the code for commanding Misty within <script> tags in the <body> section of this document.

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <title>Remote Command Tutorial 2</title>
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <!-- Include references to a CDN for the Axios library and the local path where lightSocket.js is saved in the <head> of your document -->
    <script src="https://unpkg.com/axios/dist/axios.min.js"></script>
    <script src="<local-path-to-lightSocket.js>"></script>
</head>
<body>
    <script>
    // The code for commanding Misty goes here!
    </script>
</body>
</html>

Writing the Code

Within <script> tags in the <body> of your document, declare a constant variable ip and set its value to a string with your robot’s IP address. We use this variable to send commands to Misty.

// Declare a constant variable and set its value to a string with your robot’s IP address.
const ip = "<robotipaddress>";

Opening a Connection

Create a new instance of LightSocket called socket. The socket instance takes as parameters the IP address of your robot and two optional callback functions. The first callback triggers when a connection is opened, and the second triggers when it’s closed. Pass ip and a function called openCallback() to the new instance of LightSocket. Below these declarations, declare the openCallback() function.

// Create a new instance of LightSocket called socket. Pass as arguments the ip variable and a function named openCallback.
let socket = new LightSocket(ip, openCallback);

/* COMMANDS */

// Define the function passed as the callback to the new instance of LightSocket. This is the code that executes when socket opens a connection to your robot.
function openCallback() {

}

Once a connection is opened, we want to do three things:

  • Subscribe to the TimeOfFlight WebSocket.
  • Subscribe to the LocomotionCommand WebSocket.
  • Send Misty a DriveTime command.

We write the code for this inside the openCallback() function.

Subscribing to WebSockets

Let's start by subscribing to Misty’s TimeOfFlight and LocomotionCommand WebSocket connections.

The TimeOfFlight WebSocket sends data from the time-of-flight (TOF) sensors around Misty’s base. These sensors tell Misty how far objects are away from her, or if she's about to drive off a ledge. For this program, we’re interested in receiving data from Misty’s front center TOF sensor. This sensor points straight forward in Misty’s direction of travel.

The instance of LightSocket we’ve created (called socket) uses the Subscribe() method to subscribe to WebSocket connections. The Subscribe() method takes eight parameters.

socket.Subscribe(eventName, msgType, debounceMs, property, inequality, value, [returnProperty], [eventCallback])

Note that many of these parameters correlate with the values required in subscribeMsg, described in the documentation. LightSocket uses the parameters you pass to it to generate a subscription message in that format.
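To make that mapping concrete, here is a sketch of how such a subscription message might be assembled from the Subscribe() parameters. buildSubscribeMsg() is a hypothetical helper (not part of lightSocket.js), and the field names are assumptions based on the subscribeMsg format described in the documentation.

```javascript
// Hypothetical helper illustrating how Subscribe() parameters could map onto a
// subscription message; field names are assumed from the subscribeMsg format.
function buildSubscribeMsg(eventName, msgType, debounceMs, property, inequality, value, returnProperty) {
    return {
        Operation: "subscribe",
        Type: msgType,
        DebounceMs: debounceMs,
        EventName: eventName,
        ReturnProperty: returnProperty,
        // Only add an event condition when a property filter is supplied.
        EventConditions: property ? [{ Property: property, Inequality: inequality, Value: value }] : []
    };
}

// The TimeOfFlight subscription used later in this tutorial would produce:
console.log(buildSubscribeMsg("CenterTimeOfFlight", "TimeOfFlight", 100, "SensorPosition", "==", "Center", null));
```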

To subscribe to the data stream from TimeOfFlight, call the Subscribe() method on socket. Pass the following for each parameter:

  1. eventName is a string that designates the name you would like to give this event. Choose a unique name that indicates the function the event serves. Let’s call our event "CenterTimeOfFlight".
  2. msgType is a string that specifies the WebSocket data stream to subscribe to. We’re subscribing to Misty’s "TimeOfFlight" WebSocket.
  3. debounceMs specifies how often in milliseconds Misty should send a message with TimeOfFlight data. Enter 100 to receive a message every tenth of a second. At the speed we command Misty to travel, this should be precise enough for us to be able to execute a Stop command before Misty collides with an object in her path.
  4. The fourth, fifth, and sixth parameters form a comparison statement that specifies event conditions to filter out unwanted messages. The TimeOfFlight WebSocket data stream can send data from all of Misty's TOF sensors, but we only need data from her front center sensor. Pass "SensorPosition" for the property parameter to specify we want data from a specific sensor.
  5. inequality is a string that sets a comparison operator specifying the condition events must meet for you to receive messages. In this case we use "==".
  6. value is a string that specifies which value of the property parameter to check against. We want to receive information for TOF sensors where the value of the "SensorPosition" property is "Center".
  7. returnProperty is an optional parameter. We don't need to pass an argument for this parameter for our subscription to TimeOfFlight. Enter null.
  8. The parameter eventCallback is for the callback function that triggers when WebSocket data is received. Name this function _centerTimeOfFlight() to correspond to the name we provided for this event. The Callbacks section of this tutorial describes how to write the code for this function.

function openCallback() {
    // Subscribe to a new event called "CenterTimeOfFlight" that returns data when "TimeOfFlight" events are triggered. Pass arguments to make sure this event returns data for the front center time-of-flight sensor every 100 milliseconds. Pass the callback function _centerTimeOfFlight() as the final argument.
    socket.Subscribe("CenterTimeOfFlight", "TimeOfFlight", 100, "SensorPosition", "==", "Center", null, _centerTimeOfFlight);
}

The LocomotionCommand WebSocket sends data every time the robot’s linear or angular velocity changes (see the documentation here for more information). We use this WebSocket to learn when Misty has stopped moving.

As with TimeOfFlight, we need to pass eight parameters to socket.Subscribe() to receive data from LocomotionCommand. However, because we only want to know whether Misty’s movement has changed, we don’t need to filter our results to specific event properties. We only need to pass arguments for eventName ("LocomotionCommand"), the WebSocket name (also "LocomotionCommand"), and the eventCallback function, which we call _locomotionCommand(). Enter null for all of the other parameters.

function openCallback() {

    socket.Subscribe("CenterTimeOfFlight", "TimeOfFlight", 100, "SensorPosition", "==", "Center", null, _centerTimeOfFlight);

    // Subscribe to a new event called "LocomotionCommand" that returns data when Misty's angular or linear velocity changes. Pass the callback function _locomotionCommand() as the final argument.
    socket.Subscribe("LocomotionCommand", "LocomotionCommand", null, null, null, null, null, _locomotionCommand);

}

Sending Commands

After we’ve subscribed to these WebSockets, we issue the command for Misty to drive by using Axios to send a POST request to the DriveTime endpoint. This endpoint accepts values for three properties: LinearVelocity, AngularVelocity, and TimeMS. Inside the openCallback() function, create a data object with the following key/value pairs to send with the REST command:

  • Set LinearVelocity to 50 to tell Misty to drive forward at a moderate speed.
  • Set AngularVelocity to 0, so Misty drives straight without turning.
  • Set TimeMS to 5000 to specify that Misty should drive for five seconds.

function openCallback() {

    socket.Subscribe("CenterTimeOfFlight", "TimeOfFlight", 100, "SensorPosition", "==", "Center", null, _centerTimeOfFlight);

    socket.Subscribe("LocomotionCommand", "LocomotionCommand", null, null, null, null, null, _locomotionCommand);

    // Assemble the data to send with the DriveTime command.
    let data = {
        LinearVelocity: 50,
        AngularVelocity: 0,
        TimeMS: 5000
    };

}

Note: You can learn more about DriveTime and how the parameters affect Misty’s movement in the API section of this documentation.
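As a rough illustration of how those three properties combine, the sketch below uses a hypothetical helper (driveTimeData() is not part of the Misty API or this skill) to build a few example DriveTime payloads:

```javascript
// Hypothetical helper (not part of the Misty API) that assembles a DriveTime
// payload from a linear velocity, an angular velocity, and a duration in ms.
function driveTimeData(linearVelocity, angularVelocity, timeMs) {
    return { LinearVelocity: linearVelocity, AngularVelocity: angularVelocity, TimeMS: timeMs };
}

console.log(driveTimeData(50, 0, 5000));  // drive straight ahead for five seconds (this tutorial)
console.log(driveTimeData(50, 25, 5000)); // drive forward while arcing to one side
console.log(driveTimeData(0, 0, 0));      // no movement
```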

Pass the URL for the DriveTime command along with this data object to the axios.post() method. Use a then() method to handle a successful response and catch() to handle any errors.

function openCallback() {

    socket.Subscribe("CenterTimeOfFlight", "TimeOfFlight", 100, "SensorPosition", "==", "Center", null, _centerTimeOfFlight);

    socket.Subscribe("LocomotionCommand", "LocomotionCommand", null, null, null, null, null, _locomotionCommand);

    let data = {
        LinearVelocity: 50,
        AngularVelocity: 0,
        TimeMS: 5000
    };

    // Use axios.post() to send the data to the DriveTime REST API endpoint.
    axios.post("http://" + ip + "/api/drive/time", data)
        // Use .then() to handle a successful response.
        .then(function (response) {
            // Print the results of the DriveTime command to the console.
            console.log(`DriveTime was a ${response.data[0].status}`);
        })
        // Use .catch() to handle errors.
        .catch(function (error) {
            // Print any errors related to the DriveTime command to the console.
            console.log(`There was an error with the request ${error}`);
        });
};

Setting up Callbacks

Now that we’ve written the code to subscribe to the WebSocket connections and send the DriveTime command, we’re ready to write the callback functions _centerTimeOfFlight() and _locomotionCommand(). These functions trigger when Misty sends data for the events we’ve subscribed to.

Start with _centerTimeOfFlight(), the callback function passed to socket.Subscribe() for the TimeOfFlight WebSocket connection. We subscribe to the CenterTimeOfFlight event in order to examine incoming data and tell Misty what to do when she detects an object in her path. Data from this WebSocket is passed directly into the _centerTimeOfFlight() callback function. _centerTimeOfFlight() should parse this data and send Misty a Stop command if an object is detected in her path.

We define our callbacks above the section where we define our commands. Create a function called _centerTimeOfFlight() with a single parameter called data. This parameter represents the data passed to Misty when the CenterTimeOfFlight event triggers.

/* CALLBACKS */

// Define the callback function that is passed when we subscribe to the CenterTimeOfFlight event.
let _centerTimeOfFlight = function (data) {

};

When you subscribe to an event, some messages come through that don’t contain event data. These are typically registration or error messages. To ignore these messages, place the code for our callback function in try and catch statements.

let _centerTimeOfFlight = function (data) {
    // Use try and catch statements to handle exceptions and unimportant messages from the WebSocket data stream.
    try {

    }
    catch (e) { }
};

Inside the try statement, instantiate a distance variable. distance stores the distance in meters between Misty and an object detected by her front center time-of-flight sensor. This value is stored in the data response at data.message.distanceInMeters. Log distance to the console.

let _centerTimeOfFlight = function (data) {
    try {
        // Instantiate a distance variable to store the distance in meters between Misty and an object detected by her front center time-of-flight sensor.
        let distance = data.message.distanceInMeters;
        // Log this distance to the console.
        console.log(distance);
    }
    catch (e) { }
};

We only want Misty to stop when distance is a very small value, indicating she is very close to an object. To do this, write an if statement to check if distance < 0.2. Misty keeps driving if distance is greater than 0.2 meters. If distance is undefined, an exception occurs and is passed to the catch statement. This is the case when registration or error messages are received through the WebSocket. By using a try statement, our callback functions behave appropriately when the “right” messages come through, and continue execution if they cannot act on the data they receive.
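The decision logic can be isolated as a small pure function for clarity. shouldStop() below is a hypothetical name used only for this sketch; the skill itself inlines this check in the callback.

```javascript
// Sketch of the stop decision described above. shouldStop() is a hypothetical
// helper for illustration only, not part of the skill code.
function shouldStop(message) {
    try {
        // Registration and error messages lack distanceInMeters: comparing
        // undefined with < simply yields false, and a missing message object
        // throws and lands in the catch block. Either way, Misty keeps driving.
        return message.distanceInMeters < 0.2;
    }
    catch (e) {
        return false;
    }
}

console.log(shouldStop({ distanceInMeters: 0.15 })); // true: an object is closer than 0.2 meters
console.log(shouldStop({ distanceInMeters: 1.4 }));  // false: the path is clear
console.log(shouldStop(undefined));                  // false: not an event data message
```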

If distance is a value less than 0.2, use Axios to issue a POST request to the endpoint for the Stop command: "http://" + ip + "/api/drive/stop". This endpoint does not require parameters, so we can omit the second parameter of axios.post(). Use then() and catch() to log successful responses and catch potential errors.

let _centerTimeOfFlight = function (data) {
    try {
        let distance = data.message.distanceInMeters;
        console.log(distance);
        // Write an if statement to check if the distance is smaller than 0.2 meters.
        if (distance < 0.2) {
            // If the distance is less than 0.2 meters, send a request to the endpoint for the Stop command.
            axios.post("http://" + ip + "/api/drive/stop")
                .then(function (response) {
                    // Print the results of the Stop command to the console.
                    console.log(`Stop was a ${response.data[0].status}`);
                })
                .catch(function (error) {
                    // Print errors related to the Stop command to the console.
                    console.log(`There was an error with the request ${error}`);
                });
        }
    }
    catch (e) { }
};

The _centerTimeOfFlight() callback triggers every time data from Misty’s front center sensor is received. If an object is detected close enough to the sensor, a Stop command is issued, and Misty stops before colliding with the object.

The purpose of the _locomotionCommand() callback function is to “clean up” our skill when the program stops executing. Whenever you subscribe to a WebSocket, you should unsubscribe when you are done with it, so Misty stops sending data. Our program can end in two ways:

  • Misty stops driving when she detects an object in her path.
  • Misty does not detect an object in her path and stops driving after five seconds.

The LocomotionCommand event sends data whenever linear or angular velocity changes, including when Misty starts moving when the program starts. We want to unsubscribe from WebSocket connections when Misty stops and the value of LinearVelocity is 0. Declare a function called _locomotionCommand(), and pass it a parameter for the data received by the LocomotionCommand WebSocket. We only want to unsubscribe when Misty stops, so we add the condition that linearVelocity should be 0 to an if statement. As with the _centerTimeOfFlight() callback, place this condition inside a try statement, and place a catch statement to handle exceptions at the end of the function.

// Define the callback function that is passed when we subscribe to the LocomotionCommand event.
let _locomotionCommand = function (data) {
    // Use try and catch statements to handle exceptions and unimportant messages from the WebSocket data stream.
    try {
        // Use an if statement to check if Misty has stopped moving
        if (data.message.linearVelocity === 0) {
        }
    }
    catch(e) { }
};

If data.message.linearVelocity === 0, the program should unsubscribe from the WebSocket connections we’ve opened. Write commands to unsubscribe from the CenterTimeOfFlight and LocomotionCommand events, and log a message to the console so you can verify that this only happens when linearVelocity is indeed 0.

let _locomotionCommand = function (data) {
    try {
        if (data.message.linearVelocity === 0) {
            // Print a message to the console for debugging.
            console.log("LocomotionCommand received linear velocity as", data.message.linearVelocity);
            // Unsubscribe from the CenterTimeOfFlight and LocomotionCommand events.
            socket.Unsubscribe("CenterTimeOfFlight");
            socket.Unsubscribe("LocomotionCommand");                
        }
    }
    catch(e) { }
};

At the bottom of the script, call socket.Connect(). When the connection is established, the openCallback() function executes to subscribe to WebSocket connections and send Misty a DriveTime command. Data received through WebSocket connections is passed to the _centerTimeOfFlight() and _locomotionCommand() callback functions.

// Open the connection to your robot. When the connection is established, the openCallback function executes to subscribe to WebSockets and send Misty a DriveTime command. Data received through these WebSockets is passed to the _centerTimeOfFlight() and _locomotionCommand() callback functions.
socket.Connect();

Congratulations! You’ve just written another skill for Misty. Save your .html document and open it in a web browser to watch Misty go. When the document loads, the program:

  • connects with Misty
  • sends a DriveTime command for Misty to drive forward for 5 seconds
  • subscribes to TimeOfFlight events to detect if an object is in Misty’s path and sends a Stop command if so
  • subscribes to LocomotionCommand to detect when Misty has come to a stop and unsubscribes from the WebSocket connections

Full Sample

See the full .html document for reference.

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <title>Remote Command Tutorial 2</title>
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <!-- Include references to a CDN for the Axios library and the local path where lightSocket.js is saved in the <head> of your document -->
    <script src="https://unpkg.com/axios/dist/axios.min.js"></script>
    <script src="<local-path-to-lightSocket.js>"></script>
</head>
<body>
    <script>
        // Declare a constant variable and set its value to a string with your robot’s IP address.
        const ip = "<robotipaddress>";

        // Create a new instance of LightSocket called socket. Pass as arguments the ip variable and a function named openCallback.
        let socket = new LightSocket(ip, openCallback);

        /* CALLBACKS */

        // Define the callback function that is passed when we subscribe to the CenterTimeOfFlight event.
        let _centerTimeOfFlight = function (data) {

            // Use try and catch statements to handle exceptions and unimportant messages from the WebSocket data stream.
            try {
                // Instantiate a distance variable to store the distance in meters between Misty and an object detected by her front center time-of-flight sensor.
                let distance = data.message.distanceInMeters;
                // Log this distance to the console.
                console.log(distance);

                // Write an if statement to check if the distance is smaller than 0.2 meters.
                if (distance < 0.2) {
                    // If the distance is less than 0.2 meters, send a request to the endpoint for the Stop command.
                    axios.post("http://" + ip + "/api/drive/stop")
                        .then(function (response) {
                            // Print the results of the Stop command to the console.
                            console.log(`Stop was a ${response.data[0].status}`);
                        })
                        .catch(function (error) {
                            // Print errors related to the Stop command to the console.
                            console.log(`There was an error with the request ${error}`);
                        });
                }
            }
            catch (e) {
            }
        };

        // Define the callback function that is passed when we subscribe to the LocomotionCommand event.
        let _locomotionCommand = function (data) {
            // Use try and catch statements to handle exceptions and unimportant messages from the WebSocket data stream.
            try {
                // Use an if statement to check if Misty has stopped moving
                if (data.message.linearVelocity === 0) {
                    // Print a message to the console for debugging.
                    console.log("LocomotionCommand received linear velocity as", data.message.linearVelocity);
                    // Unsubscribe from the CenterTimeOfFlight and LocomotionCommand events.
                    socket.Unsubscribe("CenterTimeOfFlight");
                    socket.Unsubscribe("LocomotionCommand");
                }
            }
            catch(e) { }
        };

        /* COMMANDS */

        // Define the function passed as the callback to the new instance of LightSocket. This is the code that executes when socket opens a connection to your robot.
        function openCallback() {

            // Print a message to the console when the connection is established.
            console.log("socket opened");

            // Subscribe to a new event called "CenterTimeOfFlight" that returns data when "TimeOfFlight" events are triggered. Pass arguments to make sure this event returns data for the front center time-of-flight sensor every 100 milliseconds. Pass the callback function _centerTimeOfFlight() as the final argument.
            socket.Subscribe("CenterTimeOfFlight", "TimeOfFlight", 100, "SensorPosition", "==", "Center", null, _centerTimeOfFlight);

            // Subscribe to a new event called "LocomotionCommand" that returns data when Misty's angular or linear velocity changes. Pass the callback function _locomotionCommand() as the final argument.
            socket.Subscribe("LocomotionCommand", "LocomotionCommand", null, null, null, null, null, _locomotionCommand);

            // Assemble the data to send with the DriveTime command.
            let data = {
                LinearVelocity: 50,
                AngularVelocity: 0,
                TimeMS: 5000
            };

            // Use axios.post() to send the data to the DriveTime REST API endpoint.
            axios.post("http://" + ip + "/api/drive/time", data)
                // Chain .then() to handle a successful response.
                .then(function (response) {
                    // Print the results of the DriveTime command to the console.
                    console.log(`DriveTime was a ${response.data[0].status}`);
                })
                // Chain .catch() to handle errors.
                .catch(function (error) {
                    // Print any errors related to the DriveTime command to the console.
                    console.log(`There was an error with the request ${error}`);
                });
        };

        // Open the connection to your robot. When the connection is established, the openCallback function executes to subscribe to WebSockets and send Misty a DriveTime command. Data received through these WebSockets is passed to the _centerTimeOfFlight() and _locomotionCommand() callback functions.
        socket.Connect();
    </script>
</body>
</html>

Tutorial 3: Exploring Computer Vision

This tutorial teaches how to write a skill to have Misty detect, recognize, and learn faces. When this skill runs, Misty checks a given name against her list of known faces. If the name exists, she engages facial recognition to identify the user in her field of vision and prints a message to the console, greeting the user by name. If the name does not match a known face, Misty uses facial training to learn the user’s face, assigns it the name provided, and prints a greeting to the console. In this tutorial, you’ll learn:

  • How to use REST API commands for facial training and recognition
  • How to subscribe to and use data from Misty’s ComputerVision WebSocket connection

Before you write any code, connect Misty to your home network and make sure you know her IP address. You can see how to get this information in the first tutorial above.

Setting Up Your Project

This project uses the Axios library and the lightSocket.js helper tool to handle requests and simplify the process of subscribing to Misty’s WebSocket connections. You can read more about these tools in the first and second tutorials above.

To set up your project, create a new .html document. Give it a title, and include references to lightSocket.js and a content delivery network (CDN) for the Axios library in the <head> section. Place the code for commanding Misty within <script> tags in the <body> section of this document.

<!DOCTYPE html>
<html>
    <head>
        <meta charset="utf-8" />
        <title>Remote Command Tutorial 3</title>
        <meta name="viewport" content="width=device-width, initial-scale=1">
        <!-- Include references to a CDN for the Axios library and the local path where lightSocket.js is saved in the <head> of your document -->
        <script src="https://unpkg.com/axios/dist/axios.min.js"></script>  
        <script src="<local-path-to-lightSocket.js>"></script>
    </head>
    <body>
        <script>
            // Write the code for the program here!
        </script>
    </body>
</html>

Writing the Code

Within <script> tags in the <body> of your document, declare a constant variable ip and set its value to a string with your robot’s IP address. We use this variable to send commands to Misty.


/* GLOBALS */

// Declare a constant variable and set its value to a string with your robot's IP address.
const ip = "<robotipaddress>";

Create a global constant called you and assign it to a string with your name. Initialize an additional global variable called onList with the value false. We use these variables to check and indicate whether the user (you) is found on Misty’s list of learned faces.


/* GLOBALS */

const ip = "<robotipaddress>";

// Create a global constant called `you` and assign it to a string with your name. Initialize an additional global variable called `onList` with the value `false`. We use these variables to check and indicate whether the user (you) is found on Misty’s list of learned faces.
const you = "<your-name>";
let onList = false;

Note: Avoid hard-coding name values like this in real-world applications of Misty skills. Instead, create a form in the browser where users can type and send their names to Misty.

Opening a Connection

Beneath these global variable declarations, declare a new instance of LightSocket called socket. This instance of LightSocket takes as parameters your robot’s IP address and callback functions that trigger when the connection opens or closes. Pass ip as the first argument, and specify a parameter for the open callback function named openCallback(). Below this declaration, declare the openCallback() function with the prefix async to indicate it is an asynchronous function.


// Create a new instance of LightSocket called socket. Pass as arguments the ip variable and a function named openCallback.
let socket = new LightSocket(ip, openCallback);

/* CALLBACKS */

// Define the function passed as the callback to the new instance of LightSocket. This is the code that executes when socket opens a connection to your robot.
async function openCallback() {

}

A subscription to the ComputerVision WebSocket may already be active if the skill has run multiple times in quick succession, or if the program crashed before reaching completion. To handle this, pass "ComputerVision" to socket.Unsubscribe() at the beginning of the openCallback() function. This unsubscribes from any existing ComputerVision WebSocket connections to avoid issues caused by multiple attempts to subscribe to the same event.


async function openCallback() {
    // Unsubscribe from any existing ComputerVision WebSocket connections.
    socket.Unsubscribe("ComputerVision");
}

Next, the program should pause to give Misty time to register and execute the command. Do this by defining a helper function called sleep(). The sleep() function creates and returns a promise that resolves when setTimeout() expires after a designated number of milliseconds. Declare this function at the top of your script so other parts of the program can access it. Inside the openCallback() function, call the sleep() function and pass in a value of 3000. Prefix sleep() with await to indicate that openCallback() should pause execution of the event loop until the promise has been resolved.


/* TIMEOUT */

// Define a helper function called sleep that can pause code execution for a set period of time.
function sleep(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
}

/* CALLBACKS */

async function openCallback() {
    socket.Unsubscribe("ComputerVision");
    // Use sleep() to pause execution for three seconds to give Misty time to register and execute the command.
    await sleep(3000);
}

Next, check if the name stored in you is included on the list of faces Misty already knows. Inside openCallback(), use Axios to issue a GET request to the endpoint for the GetLearnedFaces command: "http://" + ip + "/api/beta/faces".

async function openCallback() {
    socket.Unsubscribe("ComputerVision");
    await sleep(3000);

    // Issue a GET request to the endpoint for the GetLearnedFaces command. 
    axios.get("http://" + ip + "/api/beta/faces")
}

This request returns a list of the names of faces Misty has already been trained to recognize. We pass a callback function to a then() method to parse the response to the GetLearnedFaces request, and check whether the name stored in you exists in Misty’s list of known faces. Start by storing the list returned by the response in a variable called faceArr. Print faceArr to the console.

async function openCallback() {
    socket.Unsubscribe("ComputerVision");
    await sleep(3000);

    // Use then() to pass the response to a callback function.
    axios.get("http://" + ip + "/api/beta/faces").then(function (res) {
        // Store the list of known faces in the faceArr variable and print the list to the console.
        let faceArr = res.data[0].result;
        console.log("Learned faces:", faceArr);
    });
}

The next step is to loop through the faceArr array and compare the name of each learned face to the value of you. If a match is found, we update the value of the global onList variable to true. Create a for loop to check each item in faceArr against you. Inside this loop, use an if statement to update the value of the onList variable to true if a match is found.

async function openCallback() {
    socket.Unsubscribe("ComputerVision");
    await sleep(3000);

    axios.get("http://" + ip + "/api/beta/faces").then(function (res) {
        let faceArr = res.data[0].result;
        console.log("Learned faces:", faceArr);

        // Loop through each item in faceArr. Compare each item to the value stored in the you variable.
        for (let i = 0; i < faceArr.length; i++) {
            // If a match is found, update the value of onList to true.
            if (faceArr[i] === you) {
                onList = true;
            }
        }
    });
}

At this point the program takes one of two paths. If onList becomes true, Misty should start facial recognition to identify the user in her field of vision and greet them by name. Otherwise, Misty should start facial training, so she can learn the user’s face and recognize them in the future. Set aside a section of the script for /* COMMANDS */ and declare two new functions, startFaceRecognition() and startFaceTraining(), for each of these paths. Use the prefix async when you declare the startFaceTraining() function to indicate the function is asynchronous.

/* COMMANDS */

// Define the function that executes if the value stored in you is on Misty's list of known faces. 
function startFaceRecognition() {

};

// Define the function that executes to learn the user's face if the value stored in you is not on Misty's list of known faces.
async function startFaceTraining() {

};

In either case, we need to subscribe to the ComputerVision WebSocket to receive facial data from Misty. In the openCallback() function, after the for loop has checked through the list of returned faces, call socket.Subscribe(). As described in the second tutorial above, socket.Subscribe() accepts eight parameters. Pass "ComputerVision" for the eventName and msgType parameters. Set debounceMs to 200, and pass a callback function named _ComputerVision() for the callback parameter. There is no need to define event conditions for this data stream; pass null for all other arguments.

async function openCallback() {
    socket.Unsubscribe("ComputerVision");
    await sleep(3000);

    axios.get("http://" + ip + "/api/beta/faces").then(function (res) {
        let faceArr = res.data[0].result;
        console.log("Learned faces:", faceArr);

        for (let i = 0; i < faceArr.length; i++) {
            if (faceArr[i] === you) {
                onList = true;
            }
        }

        // Subscribe to the ComputerVision WebSocket. Pass "ComputerVision" for the eventName and msgType parameters. Set debounceMs to 200, and pass a callback function named _ComputerVision for the callback parameter. There is no need to define event conditions for this data stream; pass null for all other arguments.
        socket.Subscribe("ComputerVision", "ComputerVision", 200, null, null, null, null, _ComputerVision);

    });
}

After subscribing to ComputerVision, write an if...else statement to execute startFaceRecognition() if onList is true, and to execute startFaceTraining() if onList is false. In each condition, print a message to the console to state whether the program found the user on the list.

async function openCallback() {
    socket.Unsubscribe("ComputerVision");
    await sleep(3000);

    axios.get("http://" + ip + "/api/beta/faces").then(function (res) {
        let faceArr = res.data[0].result;
        console.log("Learned faces:", faceArr);

        for (let i = 0; i < faceArr.length; i++) {
            if (faceArr[i] === you) {
                onList = true;
            }
        }

        socket.Subscribe("ComputerVision", "ComputerVision", 200, null, null, null, null, _ComputerVision);

        // Use an if...else statement to execute startFaceRecognition() if onList is true, and to execute startFaceTraining if onList is false.
        if (onList) {
            console.log("You were found on the list!");
            startFaceRecognition();
        } else {
            console.log("You're not on the list...");
            startFaceTraining();
        }

    });
}

Commands

Within the startFaceRecognition() function, print a message to the console that Misty is “starting face recognition”. Then, use Axios to send a POST request to the endpoint for the StartFaceRecognition command: "http://" + ip + "/api/beta/faces/recognition/start". There is no need to send data along with this request, so you can omit the second parameter of axios.post().

This command tells Misty to start the occipital camera so she can match the face in her field of vision with a name on her list of known faces. Because this is a ComputerVision event, the callback for the ComputerVision WebSocket triggers as this data comes in. If the face is recognized, the name of the recognized person is included in the WebSocket data message. Instructions for handling these messages are included in the Callbacks section of this tutorial.

function startFaceRecognition() {
    // Print a message to the console that Misty is “starting face recognition”. Then, use Axios to send a POST request to the endpoint for the StartFaceRecognition command.
    console.log("starting face recognition");   
    axios.post("http://" + ip + "/api/beta/faces/recognition/start");
};

In startFaceTraining(), log a message to the console that Misty is “starting face training”. Then use Axios to send a POST request to the endpoint for the StartFaceTraining command: "http://" + ip + "/api/beta/faces/training/start". This command tells Misty to use her occipital camera to learn the user’s face and pair it with a FaceId so she can recognize it in the future. Send a data object along with the request that includes the key FaceId with the value you to attach the name stored in you to the learned face.

async function startFaceTraining() {
    // Print a message to the console that Misty is “starting face training”. Then use Axios to send a POST request to the endpoint for the StartFaceTraining command.
    console.log("starting face training");
    axios.post("http://" + ip + "/api/beta/faces/training/start", { FaceId: you });
};

To give Misty time to learn the user’s face, use the helper function sleep() to pause execution of the program. Below the POST command, call sleep() and pass in the value 20000 for 20 seconds. This gives Misty plenty of time to finish the facial training process. Prefix sleep() with the keyword await.

async function startFaceTraining() {
    console.log("starting face training");
    axios.post("http://" + ip + "/api/beta/faces/training/start", { FaceId: you });
    // Give Misty time to complete the face training process. Call sleep and pass in the value 20000 for 20 seconds. 
    await sleep(20000);
    // Print a message to the console that face training is complete.
    console.log("face training complete");

};

When Misty is done learning the face, we want her to try to recognize it. Below sleep(), log a message to the console that face training is complete. Then, use Axios to send a POST request to the endpoint for the StartFaceRecognition command.

async function startFaceTraining() {
    console.log("starting face training");
    axios.post("http://" + ip + "/api/beta/faces/training/start", { FaceId: you });

    await sleep(20000);
    console.log("face training complete");
    // Use Axios to send a POST request to the endpoint for the StartFaceRecognition command.
    axios.post("http://" + ip + "/api/beta/faces/recognition/start");
};

Callbacks

Data sent through the ComputerVision event subscription is passed to the _ComputerVision() callback function. As discussed in previous tutorials, WebSocket connections sometimes send registration and error messages that do not contain event data. To handle messages unrelated to ComputerVision events, wrap the code for the _ComputerVision() callback inside try and catch statements. As seen in the example, you can print caught errors to the console by passing e to the catch statement, but this is not necessary for the program to execute successfully.

// Define the callback function that is passed when we subscribe to ComputerVision events.
function _ComputerVision(data) { 
    //  Wrap the code for the _ComputerVision callback inside try and catch statements to handle messages unrelated to ComputerVision events. 
    try { 

    }
    // Print caught errors to the console by passing e to the catch statement.
    catch (e) {
        console.log("Error: " + e);
    }
}

The _ComputerVision() callback triggers any time the occipital camera gathers relevant data. Messages come in regardless of whether Misty recognizes a face she detects. The message returned by the ComputerVision WebSocket includes a "personName" property. If a detected face cannot be recognized, the value of "personName" is "unknown person". If a message does not hold any face data, "personName" is null, undefined, or missing entirely. In the _ComputerVision() callback function, use an if statement to check that "personName" is not "unknown person", null, or undefined.

function _ComputerVision(data) {
    try { 
        // Use an if statement to check that personName does not equal "unknown person", null, or undefined. personName is included in the message returned by ComputerVision WebSocket events.
        if (data.message.personName !== "unknown person" && data.message.personName !== null && data.message.personName !== undefined) {

        }
    }
    catch (e) {
        console.log("Error: " + e);
    }
}
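The three checks in the condition above can be factored into a small predicate. This helper is hypothetical and not part of the tutorial's code, but it shows the same logic in isolation:

```javascript
// Hypothetical helper (not part of the tutorial's code): returns true only
// when a ComputerVision message names a recognized person.
function isRecognizedFace(message) {
    return message.personName !== "unknown person"
        && message.personName !== null
        && message.personName !== undefined;
}
```

A message with a recognized name passes the check; "unknown person" and messages without face data do not, because accessing a missing "personName" property yields undefined.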

Note: This program does not handle the case where the value of you is on the list of known faces, but does not match the face of the person in Misty’s field of vision. This tutorial is designed to introduce the basics of face commands and ComputerVision events, and does not address how to handle issues such as the above. This kind of edge case could be handled in a number of ways. For example, you could have Misty print a message that the face does not match the value stored in you, and then command her to learn the new face and assign it a numeric value for FaceId. Alternately, you could have Misty start face training and include a form in your .html document to allow the user to pass a new value for FaceId. The decision is yours!
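For illustration, one way to structure that decision is sketched below. The chooseNextAction() helper and its return values are hypothetical, not part of this tutorial's code:

```javascript
// Hypothetical sketch of the edge case described in the note above.
// faceArr is Misty's list of learned faces, you is the expected name, and
// recognizedName is the name (if any) reported by face recognition.
function chooseNextAction(faceArr, you, recognizedName) {
    if (!faceArr.includes(you)) {
        // The user was never trained: learn the face under the name in you.
        return "train:" + you;
    }
    if (recognizedName === you) {
        // The face in view matches the expected name: greet the user.
        return "greet";
    }
    // you is on the list, but the face in view belongs to someone else:
    // train the new face under a placeholder ID (a real skill might prompt
    // the user for a name instead).
    return "train:new-face";
}
```

A skill could call a helper like this after recognition completes and send the appropriate StartFaceTraining request based on the result.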

If a face is recognized, the value of the "personName" property is the name of the recognized person. In our case, this should also be the string stored in you. Inside the if statement, write code to print a message to greet the recognized face, unsubscribe from "ComputerVision", and issue a POST request to the endpoint for the StopFaceRecognition command: "http://" + ip + "/api/beta/faces/recognition/stop".

function _ComputerVision(data) {
    try {
        if (data.message.personName !== "unknown person" && data.message.personName !== null && data.message.personName !== undefined) {
            // If the face is recognized, print a message to greet the person by name.
            console.log(`A face was recognized. Hello there ${data.message.personName}!`);

            // Unsubscribe from the ComputerVision WebSocket.
            socket.Unsubscribe("ComputerVision");
            // Use Axios to issue a POST command to the endpoint for the StopFaceRecognition command.
            axios.post("http://" + ip + "/api/beta/faces/recognition/stop");
        }
    }
    catch (e) {
        console.log("Error: " + e);
    }
}

At the bottom of the script, call socket.Connect(). When the connection is established, the openCallback() function executes and the process begins.

// Open the connection to your robot. When the connection is established, the openCallback function executes to check whether the value stored in you is on Misty's list of known faces. Then, the program subscribes to the ComputerVision WebSocket, and Misty either greets you by name or starts facial training to learn your face so she can greet you in the future.
socket.Connect();

Congratulations! You have written another remote skill for Misty. When the document loads, the program:

  • connects with Misty
  • sends a GetLearnedFaces command and checks whether your name is on the list of faces Misty already knows
  • subscribes to the ComputerVision WebSocket to receive messages when Misty is commanded to StartFaceRecognition
  • recognizes and greets you if you are on the list of known faces, or sends a StartFaceTraining command to learn your face if you are not

Full Sample

See the full .html document for reference.

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <title>Remote Command Tutorial 3</title>
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <!-- Include references to a CDN for the Axios library and the local path where lightSocket.js is saved in the <head> of your document -->
    <script src="https://unpkg.com/axios/dist/axios.min.js"></script>
    <script src="<local-path-to-lightSocket.js>"></script>
</head>
<body>
    <script>
        /* GLOBALS */

        // Declare a constant variable and set its value to a string with your robot's IP address.
        const ip = "<robotipaddress>";
        // Create a global constant called `you` and assign it to a string with your name. Initialize an additional global variable called `onList` with the value `false`. We use these variables to check and indicate whether the user (you) is found on Misty’s list of learned faces.
        const you = "<your-name>";
        let onList = false;

        // Create a new instance of LightSocket called socket. Pass as arguments the ip variable and a function named openCallback.
        let socket = new LightSocket(ip, openCallback);

        /* TIMEOUT */
        // Define a helper function called sleep that can pause code execution for a set period of time.
        function sleep(ms) {
            return new Promise(resolve => setTimeout(resolve, ms));
        }

        /* CALLBACKS */

        // Define the function passed as the callback to the new instance of LightSocket. This is the code that executes when socket opens a connection to your robot.
        async function openCallback() {
            // Unsubscribe from any existing ComputerVision WebSocket connections.
            socket.Unsubscribe("ComputerVision");
            // Pause execution for three seconds to give Misty time to register and execute the command.
            await sleep(3000);

            // Issue a GET request to the endpoint for the GetLearnedFaces command. Use then() to pass the response to a callback function.
            axios.get("http://" + ip + "/api/beta/faces").then(function (res) {
                // Store the list of known faces in the faceArr variable and print the list to the console.
                let faceArr = res.data[0].result;
                console.log("Learned faces:", faceArr);

                // Loop through each item in faceArr. Compare each item to the value stored in the you variable.
                for (let i = 0; i < faceArr.length; i++) {
                    // If a match is found, update the value of onList to true.
                    if (faceArr[i] === you) {
                        onList = true;
                    }
                }

                // Subscribe to the ComputerVision WebSocket. Pass "ComputerVision" for the eventName and msgType parameters. Set debounceMs to 200, and pass a callback function named _ComputerVision for the callback parameter. There is no need to define event conditions for this data stream; pass null for all other arguments.
                socket.Subscribe("ComputerVision", "ComputerVision", 200, null, null, null, null, _ComputerVision);

                // Use an if...else statement to execute startFaceRecognition() if onList is true, and to execute startFaceTraining if onList is false.
                if (onList) {
                    console.log("You were found on the list!");
                    startFaceRecognition();
                } else {
                    console.log("You're not on the list...");
                    startFaceTraining();
                }
            });
        };

        // Define the callback function that is passed when we subscribe to ComputerVision events.
        function _ComputerVision(data) {
            //  Wrap the code for the _ComputerVision callback inside try and catch statements to handle messages unrelated to ComputerVision events. 
            try {
                // Use an if statement to check that personName does not equal "unknown person", null, or undefined. personName is included in the message returned by ComputerVision WebSocket events.
                if (data.message.personName !== "unknown person" && data.message.personName !== null && data.message.personName !== undefined) {
                    // If the face is recognized, print a message to greet the person by name.
                    console.log(`A face was recognized. Hello there ${data.message.personName}!`);

                    // Unsubscribe from the ComputerVision WebSocket.
                    socket.Unsubscribe("ComputerVision");
                    // Use Axios to issue a POST command to the endpoint for the StopFaceRecognition command.
                    axios.post("http://" + ip + "/api/beta/faces/recognition/stop");
                }
            }
            // Print caught errors to the console by passing e to the catch statement.
            catch (e) {
                console.log("Error: " + e);
            }
        };

        /* COMMANDS */

        // Define the function that executes if the value stored in you is on Misty's list of known faces. 
        function startFaceRecognition() {
            // Print a message to the console that Misty is “starting face recognition". Then, use Axios to send a POST request to the endpoint for the StartFaceRecognition command.
            console.log("starting face recognition");
            axios.post("http://" + ip + "/api/beta/faces/recognition/start");
        };

        // Define the function that executes to learn the user's face if the value stored in you is not on Misty's list of known faces.
        async function startFaceTraining() {
            // Print a message to the console that Misty is “starting face training”. Then use Axios to send a POST request to the endpoint for the StartFaceTraining command.
            console.log("starting face training");
            axios.post("http://" + ip + "/api/beta/faces/training/start", { FaceId: you });

            // Give Misty time to complete the face training process. Call sleep and pass in the value 20000 for 20 seconds. 
            await sleep(20000);
            // Print a message to the console that face training is complete. Then, use Axios to send a POST request to the endpoint for the StartFaceRecognition command.
            console.log("face training complete");
            axios.post("http://" + ip + "/api/beta/faces/recognition/start");
        };

        // Open the connection to your robot. When the connection is established, the openCallback function executes to check whether the value stored in you is on Misty's list of known faces. Then, the program subscribes to the ComputerVision WebSocket, and Misty either greets you by name or starts facial training to learn your face so she can greet you in the future.

        socket.Connect();

    </script>
</body>
</html>

Tutorial 4: Introduction to Mapping

This tutorial describes how to use Misty’s simultaneous localization and mapping (SLAM) system to obtain data about your robot’s location and draw a map of her surroundings. When this skill runs, Misty enables the mapping capabilities of her Occipital Structure Core depth sensor and creates a map as you use the API Explorer to drive her around her environment. When she finishes driving, Misty draws a map of the location she explored. This tutorial teaches:

  • how to use mapping REST API commands
  • how to subscribe to the data stream from the SelfState WebSocket connection
  • how to transform raw map data into a graphical map of Misty’s environment

Note that many real-world applications of Misty’s mapping capabilities require her to create a map while independently exploring her environment. Programs like this can be very complex as they require mapping commands to run alongside code telling Misty where to drive and how to avoid obstacles. For simplicity, this project requires you to use the API Explorer to move Misty instead of programming an automated exploration process.

Setting Up Your Project

This project uses the Axios library and the lightSocket.js helper tool to handle requests and simplify the process of subscribing to Misty’s WebSocket connections. You can download this tool from our GitHub repository. Save the lightSocket.js file to a “tools” or “assets” folder in your project.

To set up your project, create a new HTML document. Give it a title, and include references to lightSocket.js and a CDN for the Axios library in the <head> section. We write the code for commanding Misty within <script> tags in the <body> section of this document.

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <title>Remote Command Tutorial 4</title>
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <!-- Include references to a CDN for the Axios library and the local path where lightSocket.js is saved in the <head> of your document -->
    <script src="https://unpkg.com/axios/dist/axios.min.js"></script>
    <script src="<local-path-to-lightSocket.js>"></script>
</head>
<body>
    <script>
    // The code for commanding Misty goes here!
    </script>
</body>
</html>

Writing the Code

Within <script> tags in the <body> of your document, declare a constant global variable ip and set its value to a string with your robot’s IP address. We use this variable to send commands to Misty. Other global variables are declared later in the project.

// Declare a global variable ip and set its value to a string with your robot's IP address.
const ip = "<robotipaddress>";

Create a new instance of LightSocket called socket. This instance of LightSocket takes as parameters the IP address of your robot and two optional callback functions (the first triggers when a connection is opened, and the second triggers when it’s closed). Pass the ip variable and a function called openCallback() to socket as the first and second parameters. Below these declarations, declare the openCallback() function.

// Create a new instance of LightSocket called socket. Pass as arguments the ip variable and a function named openCallback.
let socket = new LightSocket(ip, openCallback);

/* CALLBACKS */

// Define the function passed as the callback to the new instance of LightSocket. This is the code that executes when socket opens a connection to your robot.
function openCallback() {

}

Next, subscribe to the SelfState WebSocket data stream. SelfState provides data about Misty’s current internal state at regular intervals. This tutorial uses data related to the "slamStatus" property, which indicates the status of Misty’s SLAM sensor. Mapping commands only work if Misty’s SLAM system is ready to receive them, and we use the value of "slamStatus" to send Misty the right commands at the right times.

Create a function called subscribeSelfState(), and within that function call socket.Subscribe(). The socket.Subscribe() method takes eight arguments. For more information about what each of these arguments does, see the documentation on using the lightSocket.js tool.

socket.Subscribe(eventName, msgType, debounceMs, property, inequality, value, [returnProperty], [eventCallback])

Pass "SlamStatus" for the eventName argument and "SelfState" for msgType. Pass 5000 for debounceMs to tell Misty to send a SelfState message every 5 seconds. Pass null for the property, inequality, and value arguments. For the returnProperty argument, enter the string "slamStatus" to trim the message to include only the desired SLAM status data. For eventCallback, pass _SelfState as the name of the callback function to run when you receive data from this subscription.

/* WEBSOCKET SUBSCRIPTION FUNCTIONS */

// Create a function called subscribeSelfState() to subscribe to SelfState events.
function subscribeSelfState() {
    // Call socket.Subscribe(). Pass "SlamStatus" for the eventName argument and "SelfState" for msgType. Pass 5000 for debounceMs to tell Misty to send a SelfState message every 5 seconds. Pass null for the property, inequality, and value arguments. For the returnProperty argument, enter the string "slamStatus" to trim the message to include only the desired SLAM status data. For eventCallback, pass _SelfState as the name of the callback function to run when you receive data from this subscription.
    socket.Subscribe("SlamStatus", "SelfState", 5000, null, null, null, "slamStatus", _SelfState);
}

Call this subscribeSelfState() function inside openCallback() to subscribe to this event only after you establish a connection to Misty.

/* CALLBACKS */

function openCallback() {
    // Call subscribeSelfState() to subscribe to SelfState events only after you establish a connection to Misty.
    subscribeSelfState();
}

Define another global variable to keep track of whether we are subscribed to the event. Call it subscribed and initialize it as false.

/* GLOBALS */

const ip = "<robotipaddress>";
// Define a global variable to keep track of whether we are subscribed to the event.
let subscribed = false;
let socket = new LightSocket(ip, openCallback);

Now we can write the code to start mapping. Define an asynchronous function called startMapping() and call this function after subscribeSelfState() in openCallback().

function openCallback() {
    subscribeSelfState();
    // Call the startMapping() function to start mapping after the event subscription is established.
    startMapping();
}

/* COMMANDS */

// Declare a function that sends the command to start mapping.
function startMapping() {
}

The startMapping() function sends a request to the REST endpoint for the command to start mapping. However, this request should not be sent until after we have subscribed to the SelfState event. We want our program to pause so the event subscription has time to become established. To accomplish this, create a helper function called sleep(). The sleep() function creates and returns a promise that resolves when setTimeout() expires after a designated number of milliseconds. Define sleep() at the top of the script so it can be referenced throughout the program.

/*TIMEOUT */

// Define a helper function called sleep that can pause code execution for a set period of time.
function sleep(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
}

Within startMapping(), use a while loop to run sleep() (for 500 milliseconds) repeatedly for as long as subscribed is set to false.

async function startMapping() {
    // Use a while loop to run sleep() (for 500 milliseconds) repeatedly for as long as subscribed is set to false.
    while (!subscribed) {
        await sleep(500);
    }
}

The SelfState WebSocket sends an initial registration message once the event subscription is established, which triggers our callback. When this happens, we want to update subscribed to true to break the while loop in startMapping() and continue execution. Define the _SelfState() callback function beneath openCallback() and use an if statement to check whether subscribed is false. If it is, set it to true.

/* CALLBACKS */

function openCallback() {
    subscribeSelfState();
    startMapping();
}

// Define the callback function that handles data sent through the event subscription. 
function _SelfState(data) {
    // Update subscribed to true.   
    if (!subscribed) {
        subscribed = true;
    }
}

The code within startMapping() continues to execute once the first message is received, _SelfState() is triggered, subscribed is updated to true, and our event is registered. After the while loop in startMapping(), use axios.post() to send a POST request to the endpoint for the SlamStartMapping command. SlamStartMapping tells Misty to establish her current orientation and position and engages her depth sensor to obtain map data. We refer to Misty’s orientation and position on a map as pose.

async function startMapping() {
    while (!subscribed) {
        await sleep(500);
    }
    // Use `axios.post()` to send a POST request to the endpoint for the `SlamStartMapping` command.
    axios.post("http://" + ip + "/api/alpha/slam/map/start");
}

Mapping requires commands to be issued at the right time and in the right order. As we control the flow of our program, it’s a good idea to include a series of sequential numerical logs to indicate that everything is happening in the right sequence. Within _SelfState(), print a message to the console to indicate that a subscription is established and that Misty is obtaining pose.

function _SelfState(data) {
    if (!subscribed) {
        subscribed = true;
    }
    // Print a message to the console to indicate that a subscription is established and that Misty is obtaining pose
    console.log("1 - Subscribed to SelfState, getting pose");
}

The "runMode" property within "slamStatus" holds a string value that provides the current status of Misty’s SLAM system. This status indicates when Misty is ready to start collecting data. Use the sleep() function and a global variable to keep track of whether Misty is ready to start mapping. Define the global variable ready near the top of the program and initialize it as false. We also want to track when Misty is in the process of mapping. Declare a variable mapping and set it to false as well.

/* GLOBALS */

const ip = "<robotipaddress>";
let subscribed = false;

// Use global variables to keep track of whether Misty is ready to start mapping. Define the global variable ready near the top of the program and initialize it as false.
let ready = false;
//  We also want to track when Misty is in the process of mapping. Declare a variable mapping and set it to false as well.
let mapping = false;

let socket = new LightSocket(ip, openCallback);

Back in startMapping(), set mapping to true just before making the POST request. After the request, call the next function in the process, getMap(), and define this function below startMapping(). The entire startMapping() function is shown here for reference.

async function startMapping() {
    while (!subscribed) {
        await sleep(500);
    }
    // Set mapping to true.
    mapping = true;
    axios.post("http://" + ip + "/api/alpha/slam/map/start");
    // Call getMap() to gather and return mapping data.
    getMap();
}

// Define getMap() as an asynchronous function. getMap() will gather map data as Misty drives around her environment and return it to your program when she is done mapping.
async function getMap() {
}

Within getMap(), start by creating another while loop that runs sleep() repeatedly for as long as ready is set to false. It takes a few seconds for Misty to obtain pose, and we want execution to pause until this process is complete.

async function getMap() {

    // Create a while loop that runs sleep() repeatedly for as long as ready is set to false. This pauses execution until Misty has obtained pose.
    while (!ready) {
        await sleep(500);
    }
}

The messages coming in from our SelfState event contain a status message indicating whether the sensor is ready to collect data. The callback _SelfState() triggers every 5 seconds with new information from this event.

Information received through a WebSocket connection can include registration messages that we need to filter out when evaluating the data. Below the if statement in _SelfState(), write another if statement to check that a received message is indeed the requested property data. The value of data.message is an object (not a string) when we are receiving data. Ensure a message is relevant data by executing the code in the if statement on the condition that the value of data.message is not a string.

function _SelfState(data) {
    if (!subscribed) {
        subscribed = true;
    }
    console.log("1 - Subscribed to SelfState, getting pose");
    // The value of data.message will be an object if it is relevant to our SlamStatus event. Ensure a message contains relevant data by executing the code in this if statement only when the value of data.message is not a string.
    if (typeof data.message != "string") {

    }
}
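To see why the typeof check works, consider the two kinds of messages a subscription can deliver. The message shapes below are illustrative examples, not literal robot output:

```javascript
// Registration confirmations arrive as plain strings, while event data
// arrives as an object. (Both values here are examples for illustration.)
const registrationMessage = "Registration Status: SelfState registered.";
const eventMessage = { runMode: "Exploring" };

// The same check used inside _SelfState(), extracted as a helper:
function isEventData(message) {
    return typeof message !== "string";
}
```

The string registration message fails the check and is ignored, while the object carrying "runMode" passes and is processed.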

The status of the SLAM system is contained within data.message. Declare a variable runMode to hold the current status of the SLAM system. Print a message to the console with the value of this variable to see the current status of the SLAM system as the program runs.

function _SelfState(data) {
    if (!subscribed) {
        subscribed = true;
    }
    console.log("1 - Subscribed to SelfState, getting pose");
    if (typeof data.message != "string") {

        // The status of the SLAM system is contained within data.message. Declare a variable runMode to hold the current status of the SLAM system.
        let runMode = data.message.runMode;
        // Print a message to the console with the value of this variable to see the current status of the SLAM system as the program runs.
        console.log("runMode: " + runMode);
    }
}

We want to update certain variables we defined earlier depending on the status of the SLAM sensor. Write a switch statement that checks the value of runMode. If it is equal to the string "Ready", break from the statement and do nothing ("Ready" is the initial state of the sensor). If it is equal to "Exploring", pose is obtained and Misty is ready to start driving around to collect map data. In this case we want to update ready to true to break the while loop within getMap() and continue execution in that function. The full switch statement includes more code, but you can see its current state here:

function _SelfState(data) {
    if (!subscribed) {
        subscribed = true;
    }
    console.log("1 - Subscribed to SelfState, getting pose");
    if (typeof data.message != "string") {

        let runMode = data.message.runMode;
        console.log("runMode: " + runMode);
        // Write a switch statement that checks the value of runMode. If it is equal to the string "Ready", break from the statement and do nothing ("Ready" is the initial state of the sensor). If it is equal to "Exploring", pose is obtained and Misty is ready to start driving around to collect map data. In this case we want to update ready to true to break the while loop within getMap() and continue execution in that function. 
        switch (runMode) {
            case "Ready":
                break;
            case "Exploring":
                ready = true;
                break;
        }
    }
}

After the while loop within getMap(), log a second message to the console to indicate that pose is obtained and Misty is ready to start collecting map data.

async function getMap() {
    while (!ready) {
        await sleep(500);
    }
    // Log a message to the console to indicate that pose is obtained and Misty is ready to start collecting map data.
    console.log("2 - Pose obtained, starting mapping");
}

The next step is to use an alert to pause execution of the program and give Misty time to drive around collecting data. Execution of the program only continues once the user clicks OK. You can use the API Explorer or the Misty Companion App to drive Misty around. Be sure to drive slowly and thoroughly cover the room Misty is mapping. As Misty drives, the Occipital Structure Core depth sensor measures her distance from the objects she detects and localizes them relative to her current orientation and location.

async function getMap() {
    while (!ready) {
        await sleep(500);
    }
    console.log("2 - Pose obtained, starting mapping");
    // Use an alert to pause execution of the program and give Misty time to drive around collecting data. Execution of the program only continues once the user clicks OK.
    alert("Head over to the API explorer and drive Misty around the room to gather map data. Once finished, hit ok to proceed.");
}

Click OK after driving Misty around. At this point, Misty should have enough data to draw a map of her surroundings. Below the alert in getMap(), use axios.post() to send a POST request to the endpoint for the SlamStopMapping command.

async function getMap() {
    while (!ready) {
        await sleep(500);
    }
    console.log("2 - Pose obtained, starting mapping");
    alert("Head over to the API explorer and drive Misty around the room to gather map data. Once finished, hit ok to proceed.");
    // Use axios.post() to send a POST request to the endpoint for the SlamStopMapping command.
    axios.post("http://" + ip + "/api/alpha/slam/map/stop");
}

Once again, we need to pause execution while Misty stops mapping. When the process is complete, we can obtain the map data. Below the POST request, write another while loop to pause execution while mapping is true.

async function getMap() {
    while (!ready) {
        await sleep(500);
    }
    console.log("2 - Pose obtained, starting mapping");
    alert("Head over to the API explorer and drive Misty around the room to gather map data. Once finished, hit ok to proceed.");
    axios.post("http://" + ip + "/api/alpha/slam/map/stop");
    // Write another while loop to pause execution while mapping is true.
    while (mapping) {
        await sleep(500); 
    }
}

In the switch statement of the _SelfState() callback function, add one more case. If runMode is equal to the string "Paused", update mapping to false. This will occur a few seconds after we issue the SlamStopMapping command.

function _SelfState(data) {
    if (!subscribed) {
        subscribed = true;
    }
    console.log("1 - Subscribed to SelfState, getting pose");
    if (typeof data.message != "string") {

        let runMode = data.message.runMode;
        console.log("runMode: " + runMode);

        switch (runMode) {
            case "Ready":
                break;
            case "Exploring":
                ready = true;
                break;
            // If runMode is equal to the string "Paused", update mapping to false. This will occur a few seconds after we issue the SlamStopMapping command.
            case "Paused":
                mapping = false;
                break;
        }
    }
}

Wrap the second if statement of _SelfState() inside a try...catch statement to handle any unforeseen exceptions.

function _SelfState(data) {
    if (!subscribed) {
        subscribed = true;
    }
    console.log("1 - Subscribed to SelfState, getting pose");
    // Wrap the second if statement of _SelfState() inside a try, catch statement to handle any unforeseen exceptions.
    try {
        if (typeof data.message != "string") {

            let runMode = data.message.runMode;
            console.log("runMode: " + runMode);

            switch (runMode) {
                case "Ready":
                    break;
                case "Exploring":
                    ready = true;
                    break;
                case "Paused":
                    mapping = false;
                    break;
            }
        }
    }
    catch (e) {
    }
}

To review: within getMap(), after Misty gathers map data and sends the command to stop mapping, we use a while loop to pause execution until Misty’s SLAM sensor status is "Paused" (indicating mapping has stopped). This status is tracked by the value of mapping and updated within the _SelfState() function. When mapping has stopped, the execution of getMap() continues.
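This flag-polling pattern (a WebSocket callback flips a flag, and an async function repeatedly sleeps until the flag changes) can be reduced to a minimal, self-contained sketch. The onStatusMessage() and waitForMappingToStop() names below are hypothetical stand-ins for _SelfState() and the loop inside getMap():

```javascript
// Minimal sketch of the flag-polling pattern used in this tutorial.
function sleep(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
}

let mapping = true;

// Stands in for the _SelfState() callback: an external event eventually flips the flag.
function onStatusMessage(runMode) {
    if (runMode === "Paused") mapping = false;
}

// Stands in for the wait inside getMap(): poll until the flag clears.
async function waitForMappingToStop() {
    while (mapping) {
        await sleep(50);
    }
    return "stopped";
}

// Simulate the SLAM system reporting "Paused" after 200 ms.
setTimeout(() => onStatusMessage("Paused"), 200);
waitForMappingToStop().then(result => console.log(result)); // logs "stopped"
```

Polling with a short sleep keeps the code simple at the cost of up to one polling interval of extra latency; at this tutorial's 500 ms interval, that delay is negligible.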

At this point, print another message to the console indicating the mapping process has stopped and the map is being obtained.

async function getMap() {
    while (!ready) {
        await sleep(500);
    }
    console.log("2 - Pose obtained, starting mapping");
    alert("Head over to the API explorer and drive Misty around the room to gather map data. Once finished, hit ok to proceed.");
    axios.post("http://" + ip + "/api/alpha/slam/map/stop");
    while (mapping) {
        await sleep(500); 
    }
    // Print a message to the console indicating the mapping process has stopped and the map is being obtained.
    console.log("3 - Mapping has stopped, obtaining map");
}

Note: If the program is running properly, these log messages should appear in order. If they don't (for example, if you see message 3 before message 2), something isn't right, and you'll need to debug the issue.

In order to get the raw map data Misty just collected, use axios.get() to send a GET request to the endpoint for the SlamGetRawMap command. Use then() to call two new functions, unsubscribeSelfState() and processMap(). We use these commands to respectively unsubscribe from the event and generate a graphical map from the map data. Log any errors to the console within a catch() statement.

async function getMap() {
    while (!ready) {
        await sleep(500);
    }
    console.log("2 - Pose obtained, starting mapping");
    alert("Head over to the API explorer and drive Misty around the room to gather map data. Once finished, hit ok to proceed.");
    axios.post("http://" + ip + "/api/alpha/slam/map/stop");

    while (mapping) {
        await sleep(500); 
    }
    console.log("3 - Mapping has stopped, obtaining map");

    // Use axios.get() to send a GET request to the endpoint for the SlamGetRawMap command. Use then() to call two new functions, unsubscribeSelfState() and processMap(). We use these commands to respectively unsubscribe from the event and generate a graphical map from the map data. Log any errors to the console within a catch() statement.
    axios.get("http://" + ip + "/api/alpha/slam/map/raw")
        .then((data) => {
            unsubscribeSelfState();
            processMap(data);
        })
        .catch((err) => {
            console.log(err);
        })
}

Define unsubscribeSelfState() near subscribeSelfState() toward the top of your program. Call socket.Unsubscribe() within the function, passing the string "SlamStatus" (the name given to the SelfState event subscription).

/* WEBSOCKET SUBSCRIPTION FUNCTIONS */

function subscribeSelfState() {
    socket.Subscribe("SlamStatus", "SelfState", 5000, null, null, null, "slamStatus", _SelfState);
}

// Define unsubscribeSelfState() to unsubscribe from the SelfState event. Call socket.Unsubscribe() within the function, passing the string "SlamStatus" (the name given to the SelfState event subscription).
function unsubscribeSelfState() {
    socket.Unsubscribe("SlamStatus");
}

The processMap() function is called to isolate the map data after we receive a response from the SlamGetRawMap command. Declare a function processMap(). This function starts by printing another log message indicating we have received the map data. Define a variable, data, to store the map data sent with the response.

// Define the processMap() function to isolate the map data.
function processMap(res) {
    // Print a log message indicating we have received the map data.
    console.log("4 - Received map, processing map data");
    // Define a variable to store the map data sent with the response.
    let data = res.data;
}

The API Explorer uses a function called drawMap() to generate a graphical map from raw map data. This tutorial borrows the drawMap() function from the API Explorer code. Pass data into drawMap() to draw the map in the browser.

function processMap(res) {
    console.log("4 - Received map, processing map data");
    let data = res.data;
    // Pass data into drawMap() to draw the map in the browser.
    drawMap(data)
}

The data returned by SlamGetRawMap includes a two-dimensional matrix with values representing individual cells of space on the map. Each cell in the matrix has a value of 0, 1, 2, or 3 -- 0 indicates "unknown" space, 1 indicates "open" space, 2 indicates "occupied" space, and 3 indicates "covered" space. The drawMap() function iterates over each value in the matrix to generate a two-dimensional graphical representation of the map.
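To make this encoding concrete, here is a small sketch (a hypothetical helper, not part of the tutorial code) that tallies how many cells of each type appear in a raw grid, using the same value-to-label mapping drawMap() relies on:

```javascript
// Hypothetical helper: count the cells of each type in a raw occupancy grid.
// The value-to-label mapping matches the cases in drawMap().
const CELL_LABELS = { 0: "unknown", 1: "open", 2: "occupied", 3: "covered" };

function tallyGrid(grid) {
    // Start every label at zero so an empty grid still reports all four types.
    const counts = { unknown: 0, open: 0, occupied: 0, covered: 0 };
    for (const row of grid) {
        for (const cell of row) {
            const label = CELL_LABELS[cell];
            if (label) counts[label]++;
        }
    }
    return counts;
}

// A tiny 2 x 3 grid: one unknown cell, three open, one occupied, one covered.
const sampleGrid = [
    [0, 1, 1],
    [2, 3, 1],
];
console.log(tallyGrid(sampleGrid)); // → { unknown: 1, open: 3, occupied: 1, covered: 1 }
```

A tally like this makes a quick sanity check after mapping: a grid that is almost entirely "unknown" usually means Misty did not cover enough of the room while driving.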

Insert the helper function drawMap() at the end of the program.

// Use this function from the API Explorer source code to create a graphic image of the map Misty generates.
function drawMap(data) {
    var canvas = document.getElementById("mapCanvas");
    var context = canvas.getContext("2d");
    canvas.width = (data[0].result.width - 1) * pixelsPerGrid;
    canvas.height = (data[0].result.height - 1) * pixelsPerGrid;
    context.scale(pixelsPerGrid, pixelsPerGrid);
    data[0].result.grid.reverse().forEach(function (item) { item.reverse(); });
    for (var currentX = data[0].result.height - 1; currentX >= 0; currentX--) {
        for (var currentY = data[0].result.width - 1; currentY >= 0; currentY--) {
            context.beginPath();
            context.lineWidth = 1;
            switch (data[0].result.grid[currentX][currentY]) {
                case 0:
                    // "Unknown"
                    context.fillStyle = 'rgba(133, 133, 133, 1.0)'; // '#858585';
                    break;
                case 1:
                    // "Open"
                    context.fillStyle = 'rgba(255, 255, 255, 1.0)'; // '#FFFFFF';
                    break;
                case 2:
                    // "Occupied"
                    context.fillStyle = 'rgba(42, 42, 42, 1.0)'; // '#2A2A2A';
                    break;
                case 3:
                    // "Covered"
                    context.fillStyle = 'rgba(102, 0, 237, 1.0)'; // 'rgba(33, 27, 45, 0.5)'; // '#6600ED';
                    break;
                default:
                    context.fillStyle = '#ff9b9b';
                    break;
            }
            context.rect(currentY - 1 * pixelsPerGrid, currentX - 1 * pixelsPerGrid, pixelsPerGrid, pixelsPerGrid);
            context.fill();
        }
    }
    alert("Skill finished! Successfully obtained and drew a map!");
}

Declare a global variable pixelsPerGrid and set its value to 10. This variable is used in the drawMap() function to determine the size (in pixels) of each cell on the map. Adjust this value to change the size of the map.

/* GLOBALS */

const ip = "<robotipaddress>";

// Declare a global variable pixelsPerGrid and set its value to 10. This variable is used in the drawMap() function to determine the size (in pixels) of the map. Adjust this value to change the size of the map.
const pixelsPerGrid = 10;
let subscribed = false;
let ready = false;
let mapping = false;
let socket = new LightSocket(ip, openCallback);

In the <body> of your project, create an HTML <canvas> element to hold the map. Set the id attribute of this element to the string "mapCanvas". The drawMap() function references this id to create the map in this element.

<body>
    <!-- Create a canvas element to hold the graphic map -->
    <canvas id="mapCanvas" class="col-md-9 col-sm-12 mr-2"></canvas>
    <script>
        // Code to command Misty
    </script>
</body>

At the bottom of the script, call socket.Connect(). When the connection is established, the openCallback() function executes and the process begins.

// Open the connection to your robot.
socket.Connect();

Congratulations! You’ve written a mapping program for Misty. When the document loads, the program:

  • establishes a connection to your robot
  • subscribes to the data stream from the SelfState WebSocket connection
  • initiates the SLAM system to enable mapping
  • prompts the user to use the API Explorer to explore an area
  • generates a graphical representation of the map Misty generates

Full Sample

See the full .html document for reference.

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <title>Remote Command Tutorial 4</title>
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <!-- Include references to a CDN for the Axios library and the local path where lightSocket.js is saved in the <head> of your document -->
    <script src="https://unpkg.com/axios/dist/axios.min.js"></script>
    <script src="<local-path-to-lightSocket.js>"></script>
</head>
<body>
    <!--Create a canvas element to hold the graphic map-->
    <canvas id="mapCanvas" class="col-md-9 col-sm-12 mr-2"></canvas>
    <script>
        /* GLOBALS */

        // Declare a global variable ip and set its value to a string with your robot's IP address.
        const ip = "<robotipaddress>";
        // Declare a global variable pixelsPerGrid and set its value to 10. This variable is used in the drawMap() function to determine the size (in pixels) of the map. Adjust this value to change the size of the map.
        const pixelsPerGrid = 10;
        // Define a global variable to keep track of whether we are subscribed to the event.
        let subscribed = false;
        // Use global variables to keep track of whether Misty is ready to start mapping. Define the global variable ready near the top of the program and initialize it as false.
        let ready = false;
        //  We also want to track when Misty is in the process of mapping. Declare a variable mapping and set it to false as well.
        let mapping = false;
        // Create a new instance of LightSocket called socket. Pass as arguments the ip variable and a function named openCallback.
        let socket = new LightSocket(ip, openCallback);

        /*TIMEOUT */

        // Define a helper function called sleep that can pause code execution for a set period of time.
        function sleep(ms) {
            return new Promise(resolve => setTimeout(resolve, ms));
        }

        /* CALLBACKS */

        // Define the function passed as the callback to the new instance of LightSocket. This is the code that executes when socket opens a connection to your robot.
        function openCallback() {
            // Call subscribeSelfState() to subscribe to SelfState events only after you establish a connection to Misty.
            subscribeSelfState();
            // Call the startMapping() function to start mapping after the event subscription is established.
            startMapping();
        }
        // Define the callback function that handles data sent through the event subscription. 
        function _SelfState(data) {
            // Update subscribed to true.   
            if (!subscribed) {
                subscribed = true;
                // Print a message to the console to indicate that a subscription is established and that Misty is obtaining pose
                console.log("1 - Subscribed to SelfState, getting pose");
            }
            // Update global variables depending on the SLAM sensor's status. Wrap this next block inside a try, catch statement to handle any unforeseen exceptions.
            try {
                // The value of data.message will be an object if it is relevant to our slamStatus event. Ensure a message contains relevant data by executing the code in this if statement only under the condition that the value of data.message is not a string
                if (typeof data.message != "string") {

                    // The status of the SLAM system is contained within data.message. Declare a variable runMode to hold the current status of the SLAM system.
                    let runMode = data.message.runMode;
                    // Print a message to the console with the value of this variable to see the current status of the SLAM system as the program runs.
                    console.log("runMode: " + runMode);
                    // Write a switch statement that checks the value of runMode. If it is equal to the string "Ready", break from the statement and do nothing ("Ready" is the initial state of the sensor). If it is equal to "Exploring", pose is obtained and Misty is ready to start driving around to collect map data. In this case we want to update ready to true to break the while loop within getMap() and continue execution in that function. 
                    switch (runMode) {
                        case "Ready":
                            break;
                        case "Exploring":
                            ready = true;
                            break;
                        // If runMode is equal to the string "Paused", update mapping to false. This will occur a few seconds after we issue the SlamStopMapping command.
                        case "Paused":
                            mapping = false;
                            break;
                    }
                }
            }
            catch (e) {
            }
        }

        /* WEBSOCKET SUBSCRIPTION FUNCTIONS */
        // Create a function called subscribeSelfState() to subscribe to SelfState events.
        function subscribeSelfState() {
            // Call socket.Subscribe(). Pass `"SlamStatus"` for the `eventName` argument and "SelfState" for `msgType`. Pass `5000` for `debounceMS` to tell Misty to send a `SelfState` message every 5 seconds. Pass `null` for the `property`,`inequality`, and `value` arguments. For the `returnProperty` argument, enter the string "slamStatus" to trim the message to include only the desired SLAM status data. For `eventCallback`, pass `_SelfState` as the name of the callback function to run when you receive data from this subscription. 
            socket.Subscribe("SlamStatus", "SelfState", 5000, null, null, null, "slamStatus", _SelfState);
        }
        // Define unsubscribeSelfState() to unsubscribe from the SelfState event. Call socket.Unsubscribe() within the function, passing the string "SlamStatus" (the name given to the SelfState event subscription).
        function unsubscribeSelfState() {
            socket.Unsubscribe("SlamStatus");
        }


        /* COMMANDS */

        // Declare a function that sends the command to start mapping.
        async function startMapping() {
            // Use a while loop to run sleep() (for 500 milliseconds) repeatedly for as long as subscribed is set to false.
            while (!subscribed) {
                await sleep(500);
            }
            // update state
            mapping = true;
            // Use `axios.post()` to send a POST request to the endpoint for the `SlamStartMapping` command.
            axios.post("http://" + ip + "/api/alpha/slam/map/start");
            getMap();
        }

        // Define getMap() as an asynchronous function. getMap() will gather map data as Misty drives around her environment and return it to your program when she is done mapping.
        async function getMap() {
            // Create a while loop that runs sleep() repeatedly for as long as ready is set to false. This pauses execution until Misty has obtained pose.
            while (!ready) {
                await sleep(500);
            }
            // Log a message to the console to indicate that pose is obtained and Misty is ready to start collecting map data.
            console.log("2 - Pose obtained, starting mapping");
            // Use an alert to pause execution of the program and give Misty time to drive around collecting data. Execution of the program only continues once the user clicks OK.
            alert("Head over to the API explorer and drive Misty around the room to gather map data. Once finished, hit ok to proceed.");
            // Use axios.post() to send a POST request to the endpoint for the SlamStopMapping command.
            axios.post("http://" + ip + "/api/alpha/slam/map/stop");
            // Write another while loop to pause execution while mapping is true.
            while (mapping) {
                await sleep(500);
            }

            // Print a message to the console indicating the mapping process has stopped and the map is being obtained.
            console.log("3 - Mapping has stopped, obtaining map");

            // Use axios.get() to send a GET request to the endpoint for the SlamGetRawMap command. Use then() to call two new functions, unsubscribeSelfState() and processMap(). We use these commands to respectively unsubscribe from the event and generate a graphical map from the map data. Log any errors to the console within a catch() statement.
            axios.get("http://" + ip + "/api/alpha/slam/map/raw")
                .then((data) => {
                    unsubscribeSelfState();
                    processMap(data);
                })
                .catch((err) => {
                    console.log(err);
                })
        }
        // Define the processMap() function to isolate the map data.
        function processMap(res) {
            // Print a log message indicating we have received the map data.
            console.log("4 - Received map, processing map data");
            // Define a variable to store the map data sent with the response.
            let data = res.data;
            // Pass data into drawMap() to draw the map in the browser.
            drawMap(data)
        }

        /*** Map-drawing Code from API explorer ***/
        // Use this function from the API Explorer source code to create a graphic image of the map Misty generates.
        function drawMap(data) {
            var canvas = document.getElementById("mapCanvas");
            var context = canvas.getContext("2d");
            canvas.width = (data[0].result.width - 1) * pixelsPerGrid;
            canvas.height = (data[0].result.height - 1) * pixelsPerGrid;
            context.scale(pixelsPerGrid, pixelsPerGrid);
            data[0].result.grid.reverse().forEach(function (item) { item.reverse(); });
            for (var currentX = data[0].result.height - 1; currentX >= 0; currentX--) {
                for (var currentY = data[0].result.width - 1; currentY >= 0; currentY--) {
                    context.beginPath();
                    context.lineWidth = 1;
                    switch (data[0].result.grid[currentX][currentY]) {
                        case 0:
                            // "Unknown"
                            context.fillStyle = 'rgba(133, 133, 133, 1.0)'; // '#858585';
                            break;
                        case 1:
                            // "Open"
                            context.fillStyle = 'rgba(255, 255, 255, 1.0)'; // '#FFFFFF';
                            break;
                        case 2:
                            // "Occupied"
                            context.fillStyle = 'rgba(42, 42, 42, 1.0)'; // '#2A2A2A';
                            break;
                        case 3:
                            // "Covered"
                            context.fillStyle = 'rgba(102, 0, 237, 1.0)'; // 'rgba(33, 27, 45, 0.5)'; // '#6600ED';
                            break;
                        default:
                            context.fillStyle = '#ff9b9b';
                            break;
                    }
                    context.rect(currentY - 1 * pixelsPerGrid, currentX - 1 * pixelsPerGrid, pixelsPerGrid, pixelsPerGrid);
                    context.fill();
                }
            }
            alert("Skill finished! Successfully obtained and drew a map!");
        }

        // Open the connection to your robot.

        socket.Connect();
    </script>
</body>
</html>

Tutorial 5: Taking Pictures

This tutorial describes how to write a remote-running program for Misty that takes a photo with her 4K camera and saves it to her local storage when she detects a face in her field of vision. It teaches

  • how to subscribe to the ComputerVision WebSocket
  • how to engage with Misty’s face detection capabilities
  • how to use the TakePicture command to take photos with Misty’s 4K camera and save them to your robot
  • how to control the flow of a program to trigger commands when specific environmental circumstances are met

Setting Up Your Project

This project uses the Axios library and the lightSocket.js helper tool to handle requests and simplify the process of subscribing to Misty’s WebSocket connections. You can download this tool from our GitHub repository. Save the lightSocket.js file to a “tools” or “assets” folder in your project.

To set up your project, create a new HTML document. Give it a title and include references to lightSocket.js and a CDN for the Axios library in the <head> section. We write the code for commanding Misty within <script> tags in the <body> section of this document.

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <title>Remote Command Tutorial 5</title>
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <!-- Include references to a CDN for the Axios library and the local path where lightSocket.js is saved in the <head> of your document -->
    <script src="https://unpkg.com/axios/dist/axios.min.js"></script>
    <script src="<local-path-to-lightSocket.js>"></script>
</head>
<body>
    <script>
    // The code for commanding Misty goes here!
    </script>
</body>

Writing the Code

Within <script> tags in the <body> of your document, declare a constant variable ip and set its value to a string with your robot’s IP address. We use this variable to send commands to Misty. Other global variables are declared later in the project.

/* GLOBAL */

// Declare a global variable ip and set its value to a string with your robot's IP address.
const ip = "<robotipaddress>";

Next, define a function called sleep(). We use this function to help control the flow of the program. The sleep() function creates and returns a promise that resolves when setTimeout() expires after a designated number of milliseconds. Define sleep() beneath the global variables so it can be referenced throughout the program.

/*TIMEOUT */

// Define a helper function called sleep that can pause code execution for a set period of time.
function sleep(ms) {
    return new Promise(resolve => setTimeout(resolve, ms));
}

Create a new instance of LightSocket called socket. This instance of LightSocket takes as parameters the IP address of your robot and two optional callback functions (the first triggers when a connection is opened, and the second triggers when it's closed). Pass the ip variable and a function called openCallback() to socket as the first and second parameters.

/* GLOBALS */

const ip = "<robotipaddress>";
// Create a new instance of LightSocket called socket. Pass as arguments the ip variable and a function named openCallback.
let socket = new LightSocket(ip, openCallback);

Declare the openCallback() function. Prefix the definition of openCallback() with the keyword async to declare it as an asynchronous function and enable the use of the sleep() function.

/* CALLBACKS */

// Declare the openCallback() function. Prefix the definition of openCallback() with the keyword async to declare it as an asynchronous function and enable the use of the sleep() function.
async function openCallback() {

}

To keep track of whether we are currently subscribed to a ComputerVision event, declare a global variable called subscribed near the global ip variable.

/* GLOBALS */

const ip = "<robotipaddress>";
let socket = new LightSocket(ip, openCallback);
// To keep track of whether we are currently subscribed to a ComputerVision event, declare a global variable called subscribed near the global ip variable.
let subscribed;

Set the value of subscribed to false in the beginning of the openCallback() function.

/* CALLBACKS */

async function openCallback() {
    // Set the value of subscribed to false to show that the subscription has not been established.
    subscribed = false;
}

Each time a picture is taken, we unsubscribe from the ComputerVision WebSocket, pause execution, and re-subscribe to the WebSocket. We do this to prevent Misty from taking dozens of pictures of the same person every time she detects a face. To manage this, we send a command to unsubscribe from the "ComputerVision" event before each attempt to establish a connection.

Inside openCallback(), call socket.Unsubscribe() and pass in the "ComputerVision" event name. After unsubscribing, call sleep() (prefixed with the keyword await) and pass in the value 8000. This tells the program to pause for 8 seconds, which is how long we want Misty to wait before re-subscribing to ComputerVision and sending more face detection event data.

/* CALLBACKS */

async function openCallback() {
    subscribed = false;
    // Unsubscribe from the ComputerVision event.
    socket.Unsubscribe("ComputerVision");
    // Pause execution while the event subscription ends.
    await sleep(8000);
}

Next, call socket.Subscribe(). The socket.Subscribe() method takes eight arguments. For more information about what each of these arguments does, see the documentation on using the lightSocket.js tool here.

socket.Subscribe(eventName, msgType, debounceMS, property, inequality, value, [returnProperty], [eventCallback])

When you call socket.Subscribe(), pass "ComputerVision" for the eventName argument, pass "ComputerVision" for msgType, pass 1000 for debounceMS, and pass "_ComputerVision" for eventCallback. Pass null for all other arguments.

/* CALLBACKS */

async function openCallback() {
    subscribed = false;
    socket.Unsubscribe("ComputerVision");
    await sleep(8000);
    // Call socket.Subscribe(). Pass "ComputerVision" for the eventName argument, pass "ComputerVision" for msgType, pass 1000 for debounceMS, and pass "_ComputerVision" for eventCallback. Pass null for all other arguments.
    socket.Subscribe("ComputerVision", "ComputerVision", 1000, null, null, null, null, _ComputerVision);

}

Use the keyword async to define the _ComputerVision() callback that runs when a ComputerVision event triggers. This function takes a data argument, which holds the data from the event message. Write code to print a message to the console each time the callback triggers, including the message response data.

// Use the keyword async to define the _ComputerVision() callback that runs when a ComputerVision event triggers. This function takes a data argument, which holds the data from the event message. 
async function _ComputerVision(data) {
    // Write code to print a message to the console each time the callback triggers, including the message response data.
    console.log("CV callback called: ", data);
}

When we establish a connection, we want to update the value of subscribed to reflect that we are subscribed to the event. Use an if statement to check if subscribed is false. If it is, set it to true.

async function _ComputerVision(data) {
    console.log("CV callback called: ", data);
    // When we establish a connection, we want to update the value of subscribed to reflect that we are subscribed to the event. Use an if statement to check if subscribed is false. If it is, set it to true.
    if (!subscribed) {
        subscribed = true;
    }
}

As Misty takes pictures of the faces she recognizes, we unsubscribe and re-subscribe to "ComputerVision". However, because it’s okay for face detection to remain active even when we are not subscribed to "ComputerVision" event messages, we only need to send the command to start face detection once. We can accomplish this by using a global variable called firstTime that we initialize with a value of true.

/* GLOBALS */
const ip = "<robotipaddress>";
// We only need to send the command to start face detection once. We can accomplish this by using a global variable called firstTime that we initialize with a value of true.
let firstTime = true;
let subscribed;
let socket = new LightSocket(ip, openCallback);

When the callback triggers, use an if statement to check if firstTime is true. If it is, send a POST request to the endpoint for the StartFaceDetection command. Use catch() to handle and log any errors you receive when sending the command. Set firstTime to false and leave it that way for the remainder of the program’s execution.

async function _ComputerVision(data) {
    console.log("CV callback called: ", data);
    if (!subscribed) {
        subscribed = true;
        // Use an if statement to check if firstTime is true. If it is, send a POST request to the endpoint for the StartFaceDetection command. Use catch() to handle and log any errors you receive when sending the command. Set firstTime to false and leave it that way for the remainder of the program’s execution.
        if (firstTime) {
            axios.post("http://" + ip + "/api/beta/faces/recognition/start")
                .catch((err) => {
                    console.log(err);
                });
            // Update firstTime to tell future callbacks the first callback has already occurred.
            firstTime = false;
        }
    }
}

The first message we receive when we subscribe to the ComputerVision WebSocket is a registration message that does not contain data relevant to our program. When the _ComputerVision() callback triggers for the first time, we want to send the command to start face detection, but we want to prevent execution of the rest of the code to avoid processing this registration message. To do this, within the if statement checking the value of subscribed, use return to exit the callback and take no further action.

async function _ComputerVision(data) {
    console.log("CV callback called: ", data);
    if (!subscribed) {
        subscribed = true;
        if (firstTime) {
            axios.post("http://" + ip + "/api/beta/faces/recognition/start")
                .catch((err) => {
                    console.log(err);
                });
            firstTime = false;
        }
        // Use return to exit the callback.
        return;
    }
}

The rest of the callback function handles cases where relevant data comes through. This occurs whenever Misty detects a face in her field of vision. Because the program pauses each time a picture is taken, this section of the callback doesn’t execute more frequently than every 8 seconds.

To have Misty take a picture, use axios.get() to send a GET request to the endpoint for the TakePicture command. This endpoint accepts values for parameters that specify whether the image data should be returned as a Base64 string, what name the image file should be given, what size the image should be, whether to display the image on Misty’s screen, and whether to overwrite an image with the same file name if one exists on your robot. Read the documentation on this endpoint for detailed descriptions of these parameters. When you call axios.get(), pass in the endpoint for the TakePicture command as the first argument. For the second argument, pass in a params object with the following key/value pairs:

  • Set Base64 to null. This tells Misty not to return the image data as a base64 string.
  • Set FileName to the variable fileName. Declaring a value for this parameter tells Misty to save the photo to her file system. The photo is saved with a name that matches the value stored in the fileName variable, which is defined later in this project.
  • Set Width and Height to 1200 and 1600, respectively. These sizes match the resolution of the photo taken by the 4K camera.
  • Set DisplayOnScreen to false. We don’t want Misty to display these photos on her screen after she takes them.
  • Set OverwriteExisting to true so Misty overwrites any old images that have the same name as newly captured photos.

async function _ComputerVision(data) {
    console.log("CV callback called: ", data);
    if (!subscribed) {
        subscribed = true;
        if (firstTime) {
            axios.post("http://" + ip + "/api/beta/faces/recognition/start")
                .catch((err) => {
                    console.log(err);
                });
            firstTime = false;
        }
        return;
    }
    // Use axios.get() to send a GET request to the endpoint for the TakePicture command. This endpoint accepts values for parameters that specify whether the image data should be returned as a Base64 string, what name the image file should be given, what size the image should be, whether to display the image on Misty’s screen, and whether to overwrite an image with the same file name if one exists on your robot.
    axios.get("http://" + ip + "/api/alpha/camera", {
        params: {
            Base64: null,
            FileName: fileName,
            Width: 1200,
            Height: 1600,
            DisplayOnScreen: false,
            OverwriteExisting: true
        }
    })
}

Use a then() method to log the response, as well as a message indicating the image has been saved with the specified file name.

async function _ComputerVision(data) {
    console.log("CV callback called: ", data);
    if (!subscribed) {
        subscribed = true;
        if (firstTime) {
            axios.post("http://" + ip + "/api/beta/faces/recognition/start")
                .catch((err) => {
                    console.log(err);
                });
            firstTime = false;
        }
        return;
    }
    axios.get("http://" + ip + "/api/alpha/camera", {
        params: {
            Base64: null,
            FileName: fileName,
            Width: 1200,
            Height: 1600,
            DisplayOnScreen: false,
            OverwriteExisting: true
        }
    })
    // Use a then() method to log the response, as well as a message indicating the image has been saved with the specified file name.
        .then(function (res) {
            console.log(res);
            console.log("Image saved with fileName: '" + fileName + "'");
        });
}

We define fileName above this GET request. For this project, we want Misty to take pictures and save them with the date and time the photo was taken. To accomplish this, we use the built-in JavaScript Date object. Instantiate a new Date object, then call the method toLocaleString() to convert the date and time into a string. Windows systems disallow certain characters in file names, so we use the replace() method with regular expressions to convert the string to an acceptable, easier-to-read format. (Note: This code is fine to leave in your program if you are running it on a Mac or Unix system.) These expressions replace slashes and colons with periods, replace spaces with underscores, and remove the comma; we then append "_Face" to the file name to indicate that these are images of faces.

async function _ComputerVision(data) {
    console.log("CV callback called: ", data);
    if (!subscribed) {
        subscribed = true;
        if (firstTime) {
            axios.post("http://" + ip + "/api/beta/faces/recognition/start")
                .catch((err) => {
                    console.log(err);
                });
            firstTime = false;
        }
        return;
    }
    // Define the name to save the image with. For this project, we use the built-in JavaScript Date object to save pictures with the date and time they were taken. Windows systems disallow certain characters in file names, so we use the replace() method with regular expressions to convert the string to an acceptable, easier-to-read format. These expressions replace slashes and colons with periods, replace spaces with underscores, and remove the comma; we then append "_Face" to indicate that these are images of faces.
    let fileName = new Date().toLocaleString().replace(/[/]/g, ".").replace(/[:]/g, ".").replace(/[ ]/g, "_").replace(",", "") + "_Face";

    axios.get("http://" + ip + "/api/alpha/camera", {
        params: {
            Base64: null,
            FileName: fileName,
            Width: 1200,
            Height: 1600,
            DisplayOnScreen: false,
            OverwriteExisting: true
        }
    })
        .then(function (res) {
            console.log(res);
            console.log("Image saved with fileName: '" + fileName + "'");
        });
}
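To see exactly what this replace chain produces, you can run it on a fixed example string. (The output of toLocaleString() depends on your system's locale; the input below assumes a US-style format, so treat it as an illustration rather than what your robot will produce verbatim.)

```javascript
// Hypothetical toLocaleString() output in a US-style locale
const example = "1/2/2023, 3:04:05 PM";

// The same replace chain used to build fileName
const fileName = example
    .replace(/[/]/g, ".")   // slashes become periods
    .replace(/[:]/g, ".")   // colons become periods
    .replace(/[ ]/g, "_")   // spaces become underscores
    .replace(",", "")       // drop the comma
    + "_Face";

console.log(fileName); // "1.2.2023_3.04.05_PM_Face"
```

The result contains no characters that Windows disallows in file names, and the "_Face" suffix makes the photo easy to identify in Misty's file system.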

After the GET request, call openCallback() to start the process over again. To catch and log errors, wrap a try...catch statement around the code block that defines the value of fileName, makes the GET request, and repeats the call to openCallback().

async function _ComputerVision(data) {
    console.log("CV callback called: ", data);
    if (!subscribed) {
        subscribed = true;
        if (firstTime) {
            axios.post("http://" + ip + "/api/beta/faces/recognition/start")
                .catch((err) => {
                    console.log(err);
                });
            firstTime = false;
        }
        return;
    }
    // Wrap the GET request code block in a try...catch statement to catch and log errors.
    try {
        let fileName = new Date().toLocaleString().replace(/[/]/g, ".").replace(/[:]/g, ".").replace(/[ ]/g, "_").replace(",", "") + "_Face";
        axios.get("http://" + ip + "/api/alpha/camera", {
            params: {
                Base64: null,
                FileName: fileName,
                Width: 1200,
                Height: 1600,
                DisplayOnScreen: false,
                OverwriteExisting: true
            }
        })
            .then(function (res) {
                console.log(res);
                console.log("Image saved with fileName: '" + fileName + "'");
            });
        // Call openCallback to start the process over again
        openCallback();
    }
    catch (err) {
        console.log(err);
    }
}

At the end of the program, call socket.Connect() to open the connection to the WebSocket.

socket.Connect();

Congratulations! You’ve written a program for Misty to take a photo whenever she detects a face.

  • When the document loads, the program establishes a connection to the ComputerVision WebSocket.
  • Misty starts face detection and, each time she sees a face, takes a photo with her 4K camera.
  • These photos are saved to Misty’s local storage and given file names to indicate the date and time when the face was detected and the photo was taken.
  • The flow of the program is managed by global variables indicating the status of the WebSocket subscription and whether Misty has already started face recognition.
  • Each time a photo is taken, the program unsubscribes from the WebSocket, pauses for a few seconds, and then re-subscribes to the WebSocket connection to start the whole process over again.

Full Sample

See the full .html document for reference.

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <title>Remote Command Tutorial 5</title>
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <!-- Include references to a CDN for the Axios library and the local path where lightSocket.js is saved in the <head> of your document -->
    <script src="https://unpkg.com/axios/dist/axios.min.js"></script>
    <script src="<local-path-to-lightSocket.js>"></script>
</head>
<body>
    <script>
        /* GLOBALS */

        // Declare a global variable ip and set its value to a string with your robot's IP address.
        const ip = "<robotipaddress>";
        // We only need to send the command to start face detection once. We can accomplish this by using a global variable called firstTime that we initialize with a value of true.            
        let firstTime = true;
        // To keep track of whether we are currently subscribed to a ComputerVision event, declare a global variable called subscribed near the global ip variable.
        let subscribed;
        // Create a new instance of LightSocket called socket. Pass as arguments the ip variable and a function named openCallback.
        let socket = new LightSocket(ip, openCallback);


        /*TIMEOUT */

        // Define a helper function called sleep that can pause code execution for a set period of time.
        function sleep(ms) {
            return new Promise(resolve => setTimeout(resolve, ms));
        }

        /* CALLBACKS */
        // Declare the openCallback() function. Prefix the definition of openCallback() with the keyword async to declare it as an asynchronous function and enable the use of the sleep() function.
        async function openCallback() {
            // Set the value of subscribed to false to show that the subscription has not been established.
            subscribed = false;
            // Unsubscribe from the ComputerVision event.
            socket.Unsubscribe("ComputerVision");
            // Pause execution while the event subscription ends.
            await sleep(8000);
            // Call socket.Subscribe(). Pass "ComputerVision" for the eventName argument, pass "ComputerVision" for msgType, pass 1000 for debounceMS, and pass _ComputerVision for eventCallback. Pass null for all other arguments.
            socket.Subscribe("ComputerVision", "ComputerVision", 1000, null, null, null, null, _ComputerVision);
        }
        // Use the keyword async to define the _ComputerVision() callback that runs when a ComputerVision event triggers. This function takes a data argument, which holds the data from the event message.
        async function _ComputerVision(data) {
            // Write code to print a message to the console each time the callback triggers, including the message response data.
            console.log("CV callback called: ", data);
            // When we establish a connection, we want to update the value of subscribed to reflect that we are subscribed to the event. Use an if statement to check if subscribed is false. If it is, set it to true.
            if (!subscribed) {
                subscribed = true;
                // Use an if statement to check if firstTime is true. If it is, send a POST request to the endpoint for the StartFaceDetection command. Use catch() to handle and log any errors you receive when sending the command. Set firstTime to false and leave it that way for the remainder of the program’s execution.
                if (firstTime) {
                    axios.post("http://" + ip + "/api/beta/faces/recognition/start")
                        .catch((err) => {
                            console.log(err);
                        });
                    // Update firstTime to tell future callbacks the first callback has already occurred.
                    firstTime = false;
                }
                // Use return to exit the callback.
                return;
            }

            try {
                // Define the name to save the image with. For this project, we use the built-in JavaScript Date object to save photos with the date and time they were taken. Windows systems disallow certain characters in file names, so we use the replace() method with regular expressions to convert the string to an acceptable, easier-to-read format. These expressions replace slashes and colons with periods, replace spaces with underscores, and remove the comma; we then append "_Face" to indicate that these are images of faces.
                let fileName = new Date().toLocaleString().replace(/[/]/g, ".").replace(/[:]/g, ".").replace(/[ ]/g, "_").replace(",", "") + "_Face";
                // Use axios.get() to send a GET request to the endpoint for the TakePicture command. This endpoint accepts values for parameters that specify whether the image data should be returned as a Base64 string, what name the image file should be given, what size the image should be, whether to display the image on Misty’s screen, and whether to overwrite an image with the same file name if one exists on your robot.
                axios.get("http://" + ip + "/api/alpha/camera", {
                    params: {
                        Base64: null,
                        FileName: fileName,
                        Width: 1200,
                        Height: 1600,
                        DisplayOnScreen: false,
                        OverwriteExisting: true
                    }
                })
                    // Use a then() method to log the response, as well as a message indicating the image has been saved with the specified file name.
                    .then(function (res) {
                        console.log(res);
                        console.log("Image saved with fileName: '" + fileName + "'");
                    });
                // Call openCallback to start the process over again
                openCallback();
            }
            catch (err) {
                console.log(err);
            }
        }

        socket.Connect();
    </script>
</body>
</html>

Working with the API Explorer Code

You can use the code and examples in the API Explorer download package to help you build skills.

index.html & default.css

These files contain the user interface and styles for the Misty API Explorer.

SampleUI.js

This file defines the handlers for the index.html page events. Use SampleUI.js to see examples of all of the event listeners linked to the various buttons rendered in index.html, such as "Select a mood" or "Change LED".

MistyRobot.js

This file builds the server URL based on the robot you're interacting with. It provides a wider and more user-friendly range of commands than MistyAPI.js.
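While the actual implementation lives in MistyRobot.js, the URL-building step it performs might look something like this minimal sketch. (The function name and parameters here are illustrative assumptions, not the real MistyRobot.js API.)

```javascript
// Illustrative sketch only -- not the actual MistyRobot.js implementation.
// Builds the base server URL for a robot from its IP address and port,
// which the command wrappers can then append endpoints to.
function buildServerUrl(ip, port) {
    return "http://" + ip + ":" + port + "/api/";
}

console.log(buildServerUrl("192.168.1.100", 80)); // "http://192.168.1.100:80/api/"
```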

SampleUI.js calls MistyRobot.js to process user actions by sending commands through MistyAPI.js and MistyWebSocket.js.

MistyAPI.js

This file is a one-to-one wrapper for most of Misty's API endpoints. It constructs payloads to pass to MistyAjax.js. You can call it directly once you have created a new MistyRobot by inputting the robot's IP address, port, and verbose level.

MistyWebSocket.js

This file allows you to subscribe to and unsubscribe from Misty's WebSockets.

MistyAjax.js

A simple wrapper for Ajax GET and POST requests, this file sends Ajax calls to Misty.
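To give a rough idea of what such a wrapper does, here is a minimal sketch. The function and parameter names are illustrative assumptions rather than the real MistyAjax.js API; the transport function is injected so the wrapper itself stays independent of any particular Ajax library (in practice it would delegate to something like axios.get/axios.post). The endpoints in the usage example are the same ones used earlier in this tutorial.

```javascript
// Illustrative sketch only -- the real MistyAjax.js may differ.
// A thin wrapper that builds the full URL for a command and hands it,
// along with an optional payload, to a transport function.
function makeAjaxWrapper(ip, transport) {
    return {
        get: (endpoint, params) =>
            transport("GET", "http://" + ip + endpoint, params),
        post: (endpoint, payload) =>
            transport("POST", "http://" + ip + endpoint, payload)
    };
}

// Usage with a transport that simply records each request it is given:
const calls = [];
const ajax = makeAjaxWrapper("<robotipaddress>", (method, url, data) => {
    calls.push({ method, url, data });
});
ajax.post("/api/beta/faces/recognition/start");
ajax.get("/api/alpha/camera", { FileName: "test_Face" });
```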