ifm O2D500 object detection sensor

The O2D500 object detection sensor enables reliable quality assurance through combinable 2D inspection of contours and surfaces.

1. Introduction

The sensor can be configured easily using the "ifm Vision Assistant" software. The following detection models are available: BLOB analysis, contour detection, and contour position tracking. The sensor is mainly suited to inspection tasks; since firmware 1.27.9941, it can also be used for object localization in conjunction with a robot.

Communication with the sensor is possible via digital I/Os or TCP/IP. For most inspection tasks, digital triggering and digital feedback of the test results are sufficient. If the sensor is used for object localization, the object coordinates can be transmitted to horstFX via TCP/IP.

The following article describes communication using digital I/Os and TCP/IP. Communication via TCP/IP is demonstrated with an example program in which the robot detects and grasps objects using the camera. The horstFX program and the required interface configuration for the ifm Vision Assistant can be downloaded at the end of the article.

Release notes:
Camera firmware version: 1.28.10310
ifm Vision Assistant version: 2.5.26.0
horstFX Version 2022.07

2. Setting up the system

The setup and wiring of the sensor are described in detail in the ifm Vision Assistant and in the sensor's user manual, both of which can be downloaded from the ifm homepage.

For the ifm Vision Assistant to find the camera, the PC and the camera must be in the same subnet. By default, the sensor is delivered with the IP address 192.168.0.69. The PC can then, for example, use the IP address 192.168.0.1 to communicate with the camera.

3. Calibration

The system can be calibrated in the ifm Vision Assistant when creating an application, under the Images and triggers tab of the application configuration. The wizard guides you through the calibration process in just a few steps. Different calibration methods are available depending on the application. If, for example, only parts are to be inspected, one of the "measurement calibrations" is sufficient. If object positions are to be approached with the robot (i.e. parts are to be gripped), the robot sensor calibration can be carried out. This calibration method is used for the example in this article.

When teaching in an object, it is recommended to grip the object with the robot and approach a rotation value in RZ of 0°. The object is then placed under the camera at this angle and taught in this position. The ifm Vision Assistant likewise assigns an RZ orientation of 0° to the object during teach-in. Subsequent rotational changes in the real process then correspond directly to the processing in horstFX.

4. Communication

Communication with the sensor can be digital or via TCP/IP. Both types of communication are described below:

4.1 Digital communication

Digital communication is sufficient for most inspection tasks. To help with the wiring, an overview of the respective connection cable can be called up in the ifm Vision Assistant.

4.1.1 Digital triggering

To trigger the sensor digitally, the trigger mode must be set to Positive edge in the ifm Vision Assistant when creating an application. The React to all triggers button must also be selected in the window.

The trigger can now be activated by switching the appropriate digital output in horstFX.

output_literal( "OUTPUT_1", 1.0 );  // set the trigger output high
sleep(100);                         // hold the pulse for 100 ms
output_literal( "OUTPUT_1", 0.0 );  // reset the trigger output

4.1.2 Digital result transmission

In the ifm Vision Assistant, a logic configuration can be created in the Logic tab. This allows for assigning appropriate states to digital outputs.

4.2 TCP/IP communication

In addition to digital communication, more complex scenarios can be handled by communicating with the camera via TCP/IP. For this, the camera needs to be connected to the RJ45 socket of the robot control cabinet with a network cable, and the IP addresses of both participants must be in the same subnet. In the ifm Vision Assistant, network and interface settings can be adjusted under the Device Configuration tab. For instructions on setting the robot's IP address, refer to the article on changing IP addresses. In this example, the camera has the IP address 192.168.2.15 and the robot has the IP address 192.168.2.2.

4.2.1 Establishing the connection

The two participants communicate with each other using network sockets. For more detailed information, refer to the section on Sockets.

To establish a connection with the camera in horstFX, the following commands can be used:

var IP = "192.168.2.15";                // IP address of the ifm sensor
var Port = 50010;                       // Port of the ifm sensor
var Terminator = "\r\n";                // Carriage return + line feed appended when writing a message
var Delimiter = "\r\n".charCodeAt(0);   // Delimiter (CR) used when reading a message
 
try
{
    var socket = new java.net.Socket();
    socket.connect(new java.net.InetSocketAddress(IP, Port), 10000); // 10 s connection timeout
    showHint("Connection to the sensor established");
 
    // 10 s timeout when reading a message from the socket
    socket.setSoTimeout(10000);
}
catch (e)
{
    show_info("A connection with the ifm sensor could not be established!");
}
 
try
{
    // actual program code
    // ...
}
 
finally
{
  // Close Socket
    socket.close();
}

The IP address and port of the camera may need to be adjusted to match the settings in the ifm Vision Assistant. The terminator marks the end of a message written to the sensor; the delimiter (the first character of CR LF) marks the end of a message read from it.

horstFX attempts to establish a connection with the sensor using a timeout of 10 seconds. If a connection cannot be made within this time, an information message will be displayed.

The actual program code is located within the try{} block. In case of an error that terminates the program, the finally{} block ensures that the socket is properly closed.

4.2.2 Composition of TCP/IP commands

The composition of TCP/IP commands for communicating with the camera is detailed in the ifm programmer's guide, available for download from the ifm website.

ifm offers two different protocols, which can be configured under the Device Configuration → Interfaces tab. The default setting is Version 3, which is used in the following sections.

[Image: ifm TCP/IP protocol settings]

In Protocol V3, a TCP/IP command consists of two sub-commands. This protocol, used for machine-to-machine communication, supports asynchronous commands. The first sub-command contains a ticket number (a string of four digits between 1000 and 9999) and the length of the second sub-command (a string consisting of 'L' followed by nine digits). The second sub-command contains the same ticket number followed by the actual command to the camera; the length field counts this entire sub-command, i.e. ticket, content and the terminating CR LF (in the example below: 4 + 1 + 2 = 7 bytes). Both sub-commands are terminated with a carriage return and line feed:

<Ticket><Length>CR LF
<Ticket><Content>CR LF

The command for triggering the camera appears as follows:

Command 1: <1234><L000000007>CR LF 

Command 2: <1234><t>CR LF
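
The framing can also be generated programmatically. The following sketch is plain JavaScript; the helper name buildV3Command is our own and not part of horstFX or the ifm protocol:

```javascript
// Build the two sub-commands of an ifm protocol V3 frame.
// The length field counts the entire second sub-command:
// ticket (4 characters) + content + CR LF (2 characters).
function buildV3Command(ticket, content) {
    var length = ticket.length + content.length + 2;
    // 'L' followed by the length, zero-padded to nine digits
    var lengthField = "L" + ("000000000" + length).slice(-9);
    return [ticket + lengthField, ticket + content];
}

// Trigger command from the example above:
var frame = buildV3Command("1234", "t");
// frame[0] is "1234L000000007", frame[1] is "1234t"
```

Each sub-command is then written to the socket with the CR LF terminator appended.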

4.2.3 Triggering the camera via TCP/IP

To trigger the camera via TCP/IP, you need to set the Trigger mode to Process Interface in the application configuration of the ifm Vision Assistant under the Images & Triggers tab. Additionally, make sure to select the option to respond to all triggers. 

[Image: trigger configuration in the ifm Vision Assistant]

Next, we will explain how the above command can be sent from horstFX to trigger the camera.

To send a message to the camera, the following function can be used:

var Terminator = "\r\n";        // Carriage return + line feed appended when writing a message
 
// Write a message via the socket
function writeMessage(socket, message)
{
    var printWriter =
        new java.io.PrintWriter(
            new java.io.OutputStreamWriter(
                socket.getOutputStream()));
    printWriter.print(message + Terminator);
    printWriter.flush();
}

As mentioned earlier, a camera command consists of two sub-commands. The function above is therefore called twice in succession:

// Ticket number "1234" + length of the next sub-command "L000000007"
var Message = "1234L000000007";
writeMessage(socket, Message);
     
// Trigger command: ticket number "1234" followed by the command "t"
var Message = "1234t";
writeMessage(socket, Message);

For the camera to respond to a trigger command, it must be in either test or run mode.

4.2.4 Transmission of results via TCP/IP

The camera's output character string can be generated in the ifm Vision Assistant during application configuration on the Interfaces tab. You can download the configuration from this example at the end of this article and import it into the ifm Vision Assistant.


The string in this example follows a specific format:

star;"Number of contour matches";"Object coordinate X";"Object coordinate Y";"Object coordinate Z";"Object angle RZ";stop

If an object is detected, the string will look like this:

star;1;0.000;0.000;0.000;2.63;stop

If no object is detected, the string will look like this:

star;0;stop
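
How such a result string can be split is sketched below in plain JavaScript. The function name parseResultString and the field names are our own; the field order, and the assumption that the camera sends millimetres (hence the division by 1000, as in the example program later in this article), depend on your interface configuration:

```javascript
// Parse the example result string "star;<matches>;<X>;<Y>;<Z>;<RZ>;stop".
// The field order depends entirely on the output configured in the
// ifm Vision Assistant; adjust the indices for other configurations.
function parseResultString(message) {
    var fields = message.split(";");
    var matches = parseInt(fields[1], 10);
    if (matches === 0) {
        return { matches: 0 };               // "star;0;stop": no object found
    }
    return {
        matches: matches,
        x:  parseFloat(fields[2]) / 1000,    // assumed mm from the camera, metres in horstFX
        y:  parseFloat(fields[3]) / 1000,
        z:  parseFloat(fields[4]) / 1000,
        rz: parseFloat(fields[5])            // angle in degrees
    };
}
```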

Now, let's discuss how the camera response can be read and used in horstFX.

To read a message, the following function is used. It reads a camera message up to the carriage return and line feed.

var Delimiter = "\r\n".charCodeAt(0);   // Delimiter (CR) used when reading a message
 
// Read a message from the socket
function readMessage(socket, msgSeparator)
{
    var ByteHelper = Java.type("de.fruitcore.robot.core.util.ByteHelper");
    var ByteArray = Java.type("byte[]");
    var dIn = new java.io.DataInputStream(socket.getInputStream());
    var full_msg = new ByteHelper();
    var buffer = new ByteArray(1);
 
    // Read byte by byte until the delimiter is found
    while (1)
    {
        dIn.readFully(buffer, 0, buffer.length);
        full_msg.putByte(buffer[0]);
        if (msgSeparator == buffer[0])
        {
            break;
        }
    }
    var message = new java.lang.String(full_msg.toBytes(), java.nio.charset.StandardCharsets.UTF_8);
    // Return the message without the delimiter character
    return message.substring(0, message.length - 1);
}

When the asynchronous protocol is used, the camera responds to a command with two replies. The first contains the ticket number and the length of the subsequent response. The second contains the ticket number and a character indicating the success or failure of the sent command; a '*' signifies a successful command. In horstFX, a sent camera command can be checked by calling the following function.

// Check the sent message
function checkSentMessage()
{
    var CamMsg = null;
 
    // Ticket number and length (in bytes) of the camera response:
    CamMsg = readMessage(socket, Delimiter);
    show_info("Ticket number and length of the camera response: " + CamMsg);
 
    // Ticket number of the camera response. "*" means the command was successful
    CamMsg = readMessage(socket, Delimiter);
    show_info("Ticket number and success message: " + CamMsg);
 
    CamMsg = CamMsg.toString();
    if (CamMsg.indexOf("*") == -1)
    {
        show_info("The command sent to the camera was unsuccessful. Is the camera in Run or Test mode? \nTicket number + Error: " + CamMsg);
        // Close the socket
        socket.close();
        // Terminate the program
        exitRobotScript();
    }
}

In case of a failure, the socket is closed in this example, and the program is terminated. This should be adjusted to fit the specific application.

Additionally, depending on the command sent, the camera responds with further information. For example, when the trigger command is sent, the camera also sends the predefined string containing object coordinates. This response consists of two parts: the first part includes the length of the subsequent camera response, while the second part contains the predefined string. The following code lines can be used to read these responses in horstFX:

// Read camera data. Necessary whenever a command is sent that produces a response from the camera
function getCameraData()
{
    // Length of the camera response in bytes
    var CamMsg = readMessage(socket, Delimiter);
    show_info("Length of the camera response: " + CamMsg);
 
    // Actual camera response: depends on the interface configuration in the ifm Vision Assistant
    var camResult = readMessage(socket, Delimiter);
    show_info(camResult);
 
    return camResult;
}

The actual camera response, in this example containing the object coordinates, is stored in the variable camResult and returned by the function.

5. Data Processing

Next, we will explain how the camera data received via TCP/IP is formatted appropriately and how the object position can be approached afterwards.

5.1 Obtaining the correct object coordinates

To process the camera data that has been read, it must be converted into the format horstFX expects. This is done by the following function.

// Get the object coordinates
function getObjectCoordinates()
{
    // Check the sent message
    checkSentMessage();
    // Get the camera data
    var CamReport = getCameraData();
 
    // Split the camera message.
    // Depends on the interface configuration in the ifm Vision Assistant.
    // With this configuration: start;NumberOfContourMatches;X;Y;Z;Rz
    var SplittedData = CamReport.split(";");
    var Result = parseFloat(SplittedData[1]);       // Result = 1 on a match, Result = 0 if there is no match
     
    if (Result == 0)
    {
        // No object detected
        showHint("No object detected");
        return {Result: Result};
    }  
 
    else
    {
        var ObjectX = parseFloat(SplittedData[2])/1000; // Object coordinate X
        var ObjectY = parseFloat(SplittedData[3])/1000; // Object coordinate Y
        var ObjectZ = parseFloat(SplittedData[4])/1000; // Object coordinate Z
        var ObjectRz = parseFloat(SplittedData[5]);     // Orientation Rz
         
        //show_info("Cam X: " + ObjectX + "\nCam Y: " + ObjectY + "\nCam Z: " + ObjectZ + "\nCam Rz : " + ObjectRz);
 
        // If a fixed object height should be used
        if (UseFixedObjectHeight == 1)
        {
            ObjectZ = ObjectHeight;
        }
 
        // If a fixed Rz orientation should be used, e.g. for round objects
        if (UseFixedRZCoord == 1)
        {
            ObjectRz = FixedRZCoord;
        }
 
        // Compute the normal vector (necessary when an orientation in RZ or RX is specified)
        var ObjectRx = FixedRXCoord;
        var ObjectRy = FixedRYCoord;
        Normal = getSurfaceNormalByEuler(ObjectRx, ObjectRy, ObjectRz);
 
        return {Result: Result, ObjectX: ObjectX, ObjectY: ObjectY, ObjectZ: ObjectZ, ObjectRx: ObjectRx, ObjectRy: ObjectRy, ObjectRz: ObjectRz}
    }
}

First, the sent camera message is checked for success using the described functions, and the camera data is read. Then, it is verified whether an object is detected by the camera. If so, the object coordinates in X, Y, Z, and the orientation around RZ are formatted appropriately. If the objects have an angle in RX or RY, the normal vector is formed. This allows the support point, located above the object point, to be approached with the same orientation. The linear movement to the final gripping position is thus already in the correct orientation.
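
getSurfaceNormalByEuler is a horstFX helper whose exact Euler convention is not documented here. Purely as an illustration, the following sketch computes such a normal under the assumption of a rotation order R = Rz * Ry * Rx applied to the unit vector (0, 0, 1), and derives the support point from it the same way the move commands do. The names and the convention are assumptions, not the horstFX implementation:

```javascript
// Rotate the unit z-vector by the Euler angles rx, ry, rz (in degrees),
// assuming the rotation order R = Rz * Ry * Rx (an assumed convention).
function surfaceNormalFromEuler(rxDeg, ryDeg, rzDeg) {
    var rx = rxDeg * Math.PI / 180, ry = ryDeg * Math.PI / 180, rz = rzDeg * Math.PI / 180;
    // v = Rx * (0, 0, 1)
    var x = 0, y = -Math.sin(rx), z = Math.cos(rx);
    // v = Ry * v
    var x2 = Math.cos(ry) * x + Math.sin(ry) * z;
    var z2 = -Math.sin(ry) * x + Math.cos(ry) * z;
    x = x2; z = z2;
    // v = Rz * v
    var x3 = Math.cos(rz) * x - Math.sin(rz) * y;
    var y3 = Math.sin(rz) * x + Math.cos(rz) * y;
    return { x: x3, y: y3, z: z };
}

// Support point: back off from the object along the normal,
// exactly as in the move commands: object - moveFaktor * normal.
function supportPoint(obj, normal, moveFaktor) {
    return {
        x: obj.x - moveFaktor * normal.x,
        y: obj.y - moveFaktor * normal.y,
        z: obj.z - moveFaktor * normal.z
    };
}
```

With the defaults RX = 180 and RY = 0, this normal is (0, 0, -1) regardless of RZ, so the support point lies moveFaktor metres above the object, which matches the behaviour described above.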

5.2 Approaching the Object Positions

The following code explains how the object position can be approached using a support point.

// Move to the support point
move({
        'Coord': 'CARTESIAN_BASIS',
        'MoveType': 'JOINT',
        'PoseRelation': 'ABSOLUTE',
        'anyconfiguration': false,
        'blendradius.orient': 180.0,
        'blendradius.xyz': 0.02,
        'speed.ratio': 1.0,
        'target': {'xyz+euler': [CamData.ObjectX - (moveFaktor*Normal.getX()), CamData.ObjectY - (moveFaktor*Normal.getY()), CamData.ObjectZ - (moveFaktor*Normal.getZ()), CamData.ObjectRx, CamData.ObjectRy, CamData.ObjectRz]},
        'tool': 'No Tool'
}, "Stuetzpunkt");
 
// Move to the object position
move({
        'Coord': 'CARTESIAN_BASIS',
        'MoveType': 'LINEAR',
        'PoseRelation': 'ABSOLUTE',
        'anyconfiguration': false,
        'speed.ratio': 0.5,
        'target': {'xyz+euler': [CamData.ObjectX, CamData.ObjectY, CamData.ObjectZ, CamData.ObjectRx, CamData.ObjectRy, CamData.ObjectRz]},
        'tool': 'No Tool'
}, "Objektposition");
 
// Close the gripper
closeGripper();
 
// Move back to the support point
move({
        'Coord': 'CARTESIAN_BASIS',
        'MoveType': 'JOINT',
        'PoseRelation': 'ABSOLUTE',
        'anyconfiguration': false,
        'blendradius.orient': 180.0,
        'blendradius.xyz': 0.02,
        'speed.ratio': 1.0,
        'target': {'xyz+euler': [CamData.ObjectX - (moveFaktor*Normal.getX()), CamData.ObjectY - (moveFaktor*Normal.getY()), CamData.ObjectZ - (moveFaktor*Normal.getZ()), CamData.ObjectRx, CamData.ObjectRy, CamData.ObjectRz]},
        'tool': 'No Tool'
}, "Stuetzpunkt");

6. Complete Program 

The following section shows the complete sample program. In this scenario, the robot repeatedly grasps an object detected by the camera and then places it at a predefined position. You can download the complete horstFX program at the end of the article.

//*************************************** Variable declarations ***************************************
 
// Object detection
var UseFixedObjectHeight = 1;           // Value = 1 to use a fixed object height. Otherwise the height from the ifm calibration is used
var ObjectHeight = 0.310;               // Z height at which the object is gripped if UseFixedObjectHeight = 1. Otherwise the height from the ifm calibration is used
var UseFixedRZCoord = 0;                // Value = 1 if the RZ orientation from the camera should not be used, e.g. for round objects
var FixedRXCoord = 180;                 // Fixed RX orientation in degrees for gripping parts. Default = 180 -> flange parallel to the robot mounting surface
var FixedRYCoord = 0;                   // Fixed RY orientation in degrees for gripping parts. Default = 0 -> flange parallel to the robot mounting surface
var FixedRZCoord = 0;                   // Fixed RZ orientation in degrees for gripping parts. Used if "UseFixedRZCoord" = 1
var moveFaktor = 0.05;                  // Distance of the approach point from the pick position, calculated as a factor of the normal vector
var Normal = null;                      // Normal vector of the pick position
var CamData = null;
 
// Application number
var ApplicationNumber = "01";           // Application number in the ifm Vision Assistant: must consist of exactly 2 digits
 
// TCP/IP socket communication
var IP = "192.168.2.15";                // IP address of the ifm sensor
var Port = 50010;                       // Port of the ifm sensor
var Terminator = "\r\n";                // Carriage return + line feed appended when writing a message
var Delimiter = "\r\n".charCodeAt(0);   // Delimiter (CR) used when reading a message
 
//********************************************* Program *********************************************
 
// Establish the connection to the ifm sensor
try
{
    var socket = new java.net.Socket();
    socket.connect(new java.net.InetSocketAddress(IP, Port), 10000); // 10 s connection timeout
    showHint("Connection to the sensor established");
 
    // 10 s timeout when reading a message from the socket
    socket.setSoTimeout(10000);
}
catch (e)
{
    show_info("A connection with the ifm sensor could not be established!");
}
 
// Start point
move({
    'Coord': 'JOINT',
    'MoveType': 'JOINT',
    'PoseRelation': 'ABSOLUTE',
    'anyconfiguration': false,
    'blendradius.orient': 180.0,
    'blendradius.xyz': 0.06,
    'speed.ratio': 1.0,
    'target': {'joints': [23.027008, 8.307049, 33.134277, 0.000000, 48.596222, 180]},
    'tool': 'No Tool'
}, "Startpunkt");
 
// Initialize variables
initVars();
 
// Open the gripper
openGripper();
 
try
{
    // Activate the application
    activateApplication(ApplicationNumber);
 
    while (true)
    {
 
        // Trigger the camera
        triggerCamera();
 
        // Get the object coordinates
        CamData = getObjectCoordinates();
 
        if (CamData.Result != 0)   
        {
            // Move to the support point
            move({
                    'Coord': 'CARTESIAN_BASIS',
                    'MoveType': 'JOINT',
                    'PoseRelation': 'ABSOLUTE',
                    'anyconfiguration': false,
                    'blendradius.orient': 180.0,
                    'blendradius.xyz': 0.02,
                    'speed.ratio': 1.0,
                    'target': {'xyz+euler': [CamData.ObjectX - (moveFaktor*Normal.getX()), CamData.ObjectY - (moveFaktor*Normal.getY()), CamData.ObjectZ - (moveFaktor*Normal.getZ()), CamData.ObjectRx, CamData.ObjectRy, CamData.ObjectRz]},
                    'tool': 'No Tool'
            }, "Stuetzpunkt");
 
            // Move to the object position
            move({
                    'Coord': 'CARTESIAN_BASIS',
                    'MoveType': 'LINEAR',
                    'PoseRelation': 'ABSOLUTE',
                    'anyconfiguration': false,
                    'speed.ratio': 0.5,
                    'target': {'xyz+euler': [CamData.ObjectX, CamData.ObjectY, CamData.ObjectZ, CamData.ObjectRx, CamData.ObjectRy, CamData.ObjectRz]},
                    'tool': 'No Tool'
            }, "Objektposition");
 
            // Close the gripper
            closeGripper();
 
            // Move back to the support point
            move({
                    'Coord': 'CARTESIAN_BASIS',
                    'MoveType': 'JOINT',
                    'PoseRelation': 'ABSOLUTE',
                    'anyconfiguration': false,
                    'blendradius.orient': 180.0,
                    'blendradius.xyz': 0.02,
                    'speed.ratio': 1.0,
                    'target': {'xyz+euler': [CamData.ObjectX - (moveFaktor*Normal.getX()), CamData.ObjectY - (moveFaktor*Normal.getY()), CamData.ObjectZ - (moveFaktor*Normal.getZ()), CamData.ObjectRx, CamData.ObjectRy, CamData.ObjectRz]},
                    'tool': 'No Tool'
            }, "Stuetzpunkt");
 
            // Place position
            move({
                    'Coord': 'JOINT',
                    'MoveType': 'JOINT',
                    'PoseRelation': 'ABSOLUTE',
                    'anyconfiguration': false,
                    'speed.ratio