Semantic Programming:

Intuitive workflow using natural language

Thanks to the VIM-303 camera's easy-to-use QuickPick-303 web interface, our integrated hardware and VISION IN MOTION technology are accessible even to people who have never operated a robot before.

The Vision Guided Robotic Workcell is programmed using Blockly, a graphical programming language developed by Google.

Programming a Vision Guided Workcell with VIM-303 takes three easy steps:

  1. Define Objects

  2. Define Waypoints

  3. Define Program

Define Objects

Simply place an object within the camera's field of view. The camera detects the object and measures its dimensions. Type a name for the object and it is learned. Use it by name in future Blockly programs, for example to perform different actions based on the type of object.
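The learn-by-name step above can be sketched in Python. The `ObjectLibrary` class and its methods are illustrative assumptions, a stand-in for the camera's actual learned-object store, not the VIM-303 API:

```python
# Sketch of the "Define Objects" step, using a hypothetical stand-in
# for the camera's learned-object store (not the actual VIM-303 API).

class ObjectLibrary:
    """Maps a user-typed name to the camera-measured dimensions."""
    def __init__(self):
        self.objects = {}

    def learn(self, name, dims):
        # dims: measured (length, width, height) in millimetres
        self.objects[name] = dims

    def __contains__(self, name):
        return name in self.objects

library = ObjectLibrary()
# Example values: the real dimensions come from the camera's measurement.
library.learn("AluminumBlock", (80, 40, 20))
print("AluminumBlock" in library)
```

Once learned, the name can be used directly in any later program, just as `AluminumBlock` is used by name in the Blockly examples below.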


Define Waypoints

Waypoints are locations the robot will move to during a Blockly program. Jog the robot using the arrow buttons, or use Free Drive to move the robot to a location by hand. Type a name for the Waypoint. Visual Waypoints can also be defined based on an object the camera sees within its field of view.
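The two kinds of Waypoints can be sketched as a simple named store. The function names and the pose representation here are assumptions for illustration only:

```python
# Sketch of the "Define Waypoints" step; function names and the pose
# representation are illustrative assumptions, not the VIM-303 API.

waypoints = {}

def define_waypoint(name, pose):
    """Store a named robot pose recorded by jogging or Free Drive."""
    waypoints[name] = pose

def define_visual_waypoint(name, object_name):
    """Store a Waypoint defined relative to an object the camera sees."""
    waypoints[name] = ("visual", object_name)

define_waypoint("PickZone", (0.40, 0.10, 0.25))  # x, y, z in metres (example)
define_visual_waypoint("TopOfBox", "Box")        # resolved by vision at run time
print(sorted(waypoints))
```

A fixed Waypoint always resolves to the same pose; a Visual Waypoint is resolved by the camera each time the program runs, which is what makes the stacking and sorting examples below possible.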


Define Program using Blockly

Select action blocks from a menu. Move to Waypoints by name (such as PickZone). Pick an object by name (such as AluminumBlock) or pick any object within the field of view. Place objects onto defined Waypoints (such as WorkBench). Create programs in seconds using natural language, the way a person would be taught a task.
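Since the camera translates Blockly into Python, the pick-and-place program described above might read along these lines. The `Robot` class and its method names are assumptions; a minimal stand-in is included so the example runs:

```python
# Illustrative sketch only: the method names below are assumptions,
# with a minimal stand-in "robot" that records each command it is given.

class Robot:
    """Stand-in for the workcell's robot interface; logs each action."""
    def __init__(self):
        self.log = []

    def move_to(self, waypoint):
        self.log.append(f"move_to {waypoint}")

    def pick(self, obj):
        self.log.append(f"pick {obj}")

    def place(self, waypoint):
        self.log.append(f"place {waypoint}")

robot = Robot()

# The program reads like natural language, block by block:
robot.move_to("PickZone")     # Move to Waypoint PickZone
robot.pick("AluminumBlock")   # Pick the object named AluminumBlock
robot.place("WorkBench")      # Place it onto Waypoint WorkBench

print(robot.log)
```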

Visual Stacking

More complex programs are easy to create. This program stacks four decks of cards on top of a box. A variable, TopOfStack, is set to the visual location of a Box in the field of view. A deck of cards is picked and placed on top of the box. The new top of the stack is then located visually, and the process repeats until all the decks are stacked on top of one another on the box.
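The stacking loop can be sketched as follows. The camera and robot helpers here are simple stand-ins (the deck thickness and method names are assumed values, not part of the VIM-303 API), so the loop's logic can actually be run:

```python
# Illustrative sketch of the Visual Stacking loop. The Camera/Robot
# classes and DECK_HEIGHT are stand-in assumptions for demonstration.

DECK_HEIGHT = 0.015  # assumed deck thickness in metres

class Camera:
    """Stand-in vision system: reports the current top of the stack."""
    def __init__(self, box_top=0.10):
        self.top = box_top                  # top surface of the Box

    def locate(self, name):
        return (name, self.top)             # (label, z-height) it "sees"

class Robot:
    """Stand-in robot: placing a deck raises the visible stack."""
    def __init__(self, camera):
        self.camera = camera
        self.placed = 0

    def pick(self, name):
        pass                                # pick a deck from the pick area

    def place(self, location):
        self.camera.top = location[1] + DECK_HEIGHT
        self.placed += 1

camera = Camera()
robot = Robot(camera)

top_of_stack = camera.locate("Box")         # TopOfStack := visual location of Box
for _ in range(4):                          # stack four decks
    robot.pick("Cards")
    robot.place(top_of_stack)
    top_of_stack = camera.locate("Cards")   # re-locate the new top visually

print(robot.placed, round(top_of_stack[1], 3))
```

The key idea is that TopOfStack is re-measured by vision after every place, so the program never needs hard-coded heights.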

Cards about to be stacked

After the first decks are stacked on the box

Visual Sorting

Objects can be picked and sorted by type. Specific objects can be picked, or all Known objects, or any object within the field of view of the camera. Based on which object the robot picked, a unique task can be performed, such as placing a Box on a BoxConveyor or a deck of Cards on a CardsConveyor.
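The sorting behavior amounts to a dispatch on whichever object was just picked. The object and conveyor names come from the text above; the code structure and the `RejectBin` fallback are assumptions for illustration:

```python
# Sketch of Visual Sorting: choose a destination based on which object
# the robot picked. The dict-dispatch structure is an assumption; the
# "RejectBin" fallback is hypothetical.

DESTINATIONS = {
    "Box": "BoxConveyor",
    "Cards": "CardsConveyor",
}

def destination_for(picked):
    """Return the Waypoint where a picked object should be placed."""
    return DESTINATIONS.get(picked, "RejectBin")

print(destination_for("Box"))
print(destination_for("Cards"))
```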

The power of VIM-303 is easy to harness with QuickPick-303 and Blockly.

Robot manufacturer ABB studied various graphical programming languages and selected Blockly as the most intuitive. Read their paper, Blockly Goes to Work, to learn more.

Other Programming Methods

In addition to Blockly, the VIM-303 Camera can also be programmed in Python and PolyScope.

Programming in Python

Blockly programs are translated by the Camera into Python. Advanced users can program the Camera from a PC using Python.

Programming in PolyScope

The Camera can also be controlled from PolyScope, the native programming environment of the Universal Robots arm. Connect to the camera with PolyScript commands over XML-RPC, and add visual picking to your existing blind PolyScope programs.
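The XML-RPC pattern can be demonstrated with Python's standard library. The camera's address and its RPC method names are assumptions (consult the VIM-303 documentation for the real interface); here a local stand-in server plays the camera's role so the client call can actually run:

```python
# Sketch of talking to the camera over XML-RPC. The pick() method and
# the address are assumptions; a local stand-in server simulates the
# camera so the client-side pattern can be exercised end to end.

import threading
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

# Stand-in "camera": a local XML-RPC server exposing a pick() command.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda name: f"picked {name}", "pick")
threading.Thread(target=server.handle_request, daemon=True).start()

# Client side: on a real cell, this URL would be the camera's address.
host, port = server.server_address
camera = xmlrpc.client.ServerProxy(f"http://{host}:{port}")
result = camera.pick("AluminumBlock")
print(result)
```

PolyScope programs would issue the equivalent RPC calls from PolyScript rather than Python, but the request/response pattern is the same.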