Baumer - XF200

The XF200M03W10EP from Baumer is a vision system for 2D part localization. The camera can be accessed and set up using the VeriSens Application Suite software.

1. Introduction

Once the camera is set up, the VeriSens software is no longer needed, and horstFX can communicate directly with the camera. The sensor system can be mounted either fixed next to the robot or directly on the robot's flange with a suitable bracket. This article will guide you through setting up and calibrating the sensor system. For the examples here, a small elephant will be used for detection.

2. Setting up VS XF200

2.1. Camera installation

New cameras come with a default IP address. If needed, this address can be changed temporarily when the software is started, or permanently in the camera's settings.

Once the laptop and camera are on the same network, you can access the camera using the VeriSens Application Suite software. A simple web interface is also available; it can be reached by entering the camera's IP address in a web browser.

In the device settings, make sure to set the protocol to TCP and disable end detection.

2.2. Calibrating the camera

First, the camera should be set to parameterization mode, in which all settings can be adjusted.

In the "Image Settings - Image Capture" menu, you can then capture initial images for testing. Additionally, you can adjust settings for exposure time, resolution, and edge detection.

Once the camera is aligned, it needs to be calibrated.

To do this, switch to the "Coordinates" section.

For calibration, a pattern with known distances is required. In this example, millimeter (graph) paper is used; it can be seen in the top left corner of the image.

To ensure precise calibration, place four points on the pattern whose positions can be specified accurately in millimeters. The X and Y values used here are shown below. The first point serves as the reference point; all subsequent values refer to it.

Upon successful calibration, the four points are highlighted in green.

In the "Focusing" subsection, the frame for the focus area can be adjusted.

2.3. Characteristics and alignment

Next, switch to the "Check Features" section.

Here, you can add and adjust various features. In this case, start by adding a contour tracking feature. Use the "Rectangle" shape to mark the area to be taught and adjust the settings for contrast transitions and edge accuracy. If there can be multiple parts in the field of view, activate the search area and add another field in which the parts will be searched; in this scenario, the entire camera area is selected. Then press "Teach" on the right. The match should be 100%, and the acceptance threshold can be adjusted with the slider.

After successfully teaching, you can switch to the Model Editor to enhance contour recognition. In this case, the "eye" was removed.

During the evaluation, the center of the contour can now be displayed. In this case, however, it is of limited use because the elephant is not symmetrical. To determine the position and rotation accurately, a reference point aligned with the contour is needed.

For this, a geometric point is placed at the foot of the elephant and set to "Relative to Reference" in the calculation methods. On the right side, "Position Tracking on Contour 1" must be selected. Once the point and rotation are set precisely, teach the point and then move the sliders all the way to the edges so that every position and rotation is allowed. The taught point now serves as the absolute zero point and acts as the reference for the robot.

Next, the robot must move its TCP to the reference point.

The robot's position at this reference point must be saved, as it will be needed later to estimate the position of detected objects (see the sketch below).
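How the camera's offsets are combined with the saved reference pose depends on the mounting and on how the camera's axes relate to the robot's base frame. As a minimal sketch, assuming the calibrated camera axes are parallel to the robot's base X/Y axes and use the same units, the target pose could be estimated like this (all names are illustrative and not part of the horstFX API):

function estimateObjectPose(refX, refY, refRz, camX, camY, camRz) {
    // Assumption: the camera reports offsets relative to the taught zero point,
    // and its X/Y axes are parallel to the robot's base X/Y axes.
    // refX, refY, refRz: saved robot pose at the reference point.
    // camX, camY, camRz: offsets reported by the camera for the detected object.
    return {
        x:  refX + camX,   // offset along X (same axis orientation assumed)
        y:  refY + camY,   // offset along Y
        rz: refRz + camRz  // rotation about the vertical axis
    };
}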

2.4. Teach-in interface

In the "Interface configuration" menu, you can adjust the parameters to be transmitted.

At the beginning of the transmission, a start identifier (FX) can be entered. The separator and end characters can also be customized. Along with the object match, the X and Y positions as well as the rotation are transmitted. The field on the right shows the configured data transmission.

To utilize the camera, you need to activate it, save the job, and transfer it to the camera.

3. Testing

The simulated PLC built into the software can be used to check the commands and the received data. Here, it can be seen that the object is still exactly in the position where it was taught.

In the following scenario, the object was moved 20mm to the right.

In the following image, the object has been rotated by 90 degrees.

A summary of all commands can be found in the camera's documentation.

4. Camera Control

An example application for horstFX is included in the file Baumer_Vision.js.

4.1. Example Program

The attached program demonstrates a simple example of how to communicate with the camera.

The following section explains the basic functions for communicating with the sensor system from horstFX.

4.1.1. Establishing a Connection to the Sensor System

A connection from horstFX to the camera is established through a socket. Depending on how the network is set up, the IP address and port may need to be adjusted.

function initCamera() {
    // Open a TCP socket to the camera (adjust the IP address and port to your network)
    return new java.net.Socket("192.168.178.1", 23);
}
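
The helper functions below assume that the returned socket is kept in a global variable named socket, for example:

// Open the connection once and keep the socket for the send/read helpers below
var socket = initCamera();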

4.1.2. Send data to the sensor system

Using a PrintWriter, data can be sent to the sensor system to, for example, capture a new image and analyze it.

function writeToSocket(nachricht) {
    // Wrap the socket's output stream in a PrintWriter and send the message
    var printWriter =
        new java.io.PrintWriter(
            new java.io.OutputStreamWriter(
                socket.getOutputStream()));
    printWriter.print(nachricht);
    printWriter.flush();
}

writeToSocket("TR"); // Trigger to capture an image
sleep(100);
writeToSocket("GD"); // Initiate data transmission (GetData)

4.1.3. Read data from the sensor system

Using a BufferedReader, data sent by the sensor system can be read and processed. The data from the sensor system is received as a string, allowing for further manipulation. In this code snippet, the data is displayed in an information message on the horstPANEL.

function readFromSocket() {
    // Wrap the socket's input stream in a BufferedReader
    var bufferedReader =
        new java.io.BufferedReader(
            new java.io.InputStreamReader(
                socket.getInputStream()));
    var charArrayType = Java.type("char[]");
    var buffer = new charArrayType(1000);
    // Read up to 1000 characters and build the string only from what was actually received
    var anzahlZeichen = bufferedReader.read(buffer, 0, 1000);
    var nachricht = new java.lang.String(buffer, 0, anzahlZeichen);
    return nachricht;
}

var cam_result = readFromSocket();

show_info(cam_result);
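
What exactly cam_result contains depends on the interface configuration from section 2.4 (start identifier, separators, transmitted values). As a minimal sketch, assuming a semicolon-separated telegram with the fields match, X, Y and rotation in that order, the string could be split into its parts like this (the separator, field order, and example values are assumptions, not fixed by the camera):

// Minimal parsing sketch, assuming a telegram such as "FX;98;20.0;0.0;90.0"
// (start identifier, match in %, X in mm, Y in mm, rotation in degrees)
function parseCameraResult(result) {
    var parts = ("" + result).trim().split(";");
    return {
        match:    parseFloat(parts[1]),
        x:        parseFloat(parts[2]),
        y:        parseFloat(parts[3]),
        rotation: parseFloat(parts[4])
    };
}

var pose = parseCameraResult(cam_result);
show_info("X: " + pose.x + " mm, Y: " + pose.y + " mm, Rotation: " + pose.rotation + " deg");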

5. Example Program

Baumer_Vision.js