The RB-500 is a 3D sensor from Keyence for bin-picking applications. The camera system offers a very wide range of functions, which is why this example describes only the basic ones.
1. Setup
The sensor comes with its own controller (CV-X Series) running the vision software 3D Vision-Guided Robotics. The software is operated via mouse and screen, which are connected directly to the controller. Within the software, the connection to the robot is configured, the calibration is performed, and the respective application (KLT, workpiece, gripper) is taught. Optionally, the controller can also handle the complete path planning, provided the robot system is integrated and kinematized in the software (this is currently not available for HORST).
1.1. Establishing Connection
A detailed description of setting up the camera system can be found in the appendix (Start-up Explanation.pdf). To communicate via TCP/IP, HORST and the CV-X Controller must be on the same network. The IP address of the controller can be manually assigned or automatically obtained via DHCP. The port should be set to 8500, and the delimiter at the end of each string should be set to CR (Carriage Return).
Once these settings are in place, HORST can open a socket connection to the camera system.
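A connection can be opened from a horstFX JavaScript program as shown in the following minimal sketch (the IP address is an example and must match your network configuration; the code examples in section 3 open the socket in the same way):
//Open a TCP/IP socket to the CV-X controller on port 8500
//(example IP address; adjust to your network configuration)
var socket = new java.net.Socket("192.168.0.11", 8500);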
1.2. Calibration
In the first step, the basic calibration of the camera is carried out. This involves placing the provided calibration plate on the working surface under the camera, where it is automatically detected by the software.
The hand-eye calibration is done using a calibration jig that is securely attached to the robot flange in an eccentric position.
With this calibration jig, a 3x3x3 calibration matrix is traversed, with adjustable dimensions. The matrix should be chosen to roughly match the working space of the application.
The calibration process involves manually traversing the calibration matrix point by point, where the camera detects the respective jig positions and the robot's flange coordinates are entered into the camera software. Finally, the center point of the flange is calculated in relation to the calibration jig by tilting the flange at defined (adjustable) angles, which are then recognized by the camera. By comparing the provided and detected angles, the software calculates the precise position and orientation of the flange center point.
2. Teaching the Application
Next, the application needs to be defined. This involves teaching workpieces, KLT, and grippers either optically or by providing STL data.
2.1. Search
The workpieces to be recognized can be either imported as an STL file or taught optically. For optical teaching, the workpieces are placed in various positions under the camera and automatically detected.
Next, the KLT is taught: simply place it under the sensor, and it will be detected automatically. The edges of the KLT can then be adjusted manually by setting two points.
2.2. Pick
After teaching the workpiece and KLT, the gripper is integrated into the software using an STL file. Here, you can adjust both the position of the gripper on the robot flange (offset) and the TCP.
Finally, the gripper positions are taught. This involves inserting a coordinate system into the workpiece that the gripper TCP will approach.
3. Communication/Control
Communication is conducted through strings exchanged via a socket connection. In general, these strings can be freely configured within the camera software; in this example, the pre-configured TPR commands are used.
To trigger the camera, the string "T1;" is sent via a PrintWriter. It is important to include the delimiter (semicolon) at the end of each string.
//Capture a new image
schreibeNachricht("T1;");

function schreibeNachricht(nachricht) {
    var printWriter = new java.io.PrintWriter(
        new java.io.OutputStreamWriter(socket.getOutputStream()));
    printWriter.print(nachricht);
    printWriter.flush();
}
Subsequently, the camera's response is read from the socket using a DataInputStream. The socket is continuously read until the specified delimiter (Carriage Return) is recognized (refer to Sockets for more information).
//Delimiter: Carriage Return (CR) marks the end of each camera response; adjust if configured differently
var delimiter = "\r".charCodeAt(0);
//Socket IP & port: adjust the IP address of the camera and the port as needed
var socket = new java.net.Socket("192.168.0.11", 8500);

//Reading from the socket
leseNachricht(socket, delimiter);

function leseNachricht(socket, msgSeparator) {
    var ByteHelper = Java.type("de.fruitcore.robot.core.util.ByteHelper");
    var ByteArray = Java.type("byte[]");
    var dIn = new java.io.DataInputStream(socket.getInputStream());
    var full_msg = new ByteHelper();
    var buffer = new ByteArray(1);
    //Read byte by byte until the delimiter is received
    while (true) {
        dIn.readFully(buffer, 0, buffer.length);
        full_msg.putByte(buffer[0]);
        if (msgSeparator == buffer[0]) {
            break;
        }
    }
    //Convert the collected bytes to a string (the trailing delimiter is included)
    var nachricht = new java.lang.String(full_msg.toBytes(), java.nio.charset.StandardCharsets.UTF_8);
    return nachricht;
}
After triggering the camera, the socket must be read once more to account for the camera sending back the trigger command ("T1").
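Using the functions defined above, this can look as follows (a short sketch; the echoed command is simply discarded):
//Trigger a new image and discard the echoed trigger command ("T1\r")
schreibeNachricht("T1;");
leseNachricht(socket, delimiter);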
Following this, the camera sends a string containing the number of detected objects. If no objects are detected, the string "00\r" is sent. This information can be used in the program control to capture a new image.
while (true) {
    if (leseNachricht(socket, delimiter) === "00\r") { //no object found (adjust the delimiter if necessary)
        schreibeNachricht("T1;"); //capture a new image
        leseNachricht(socket, delimiter); //discard the echoed trigger command
        sleep(100);
    } else {
        //object found, pick it up
        break;
    }
}
When an object is detected, the controller calculates the corresponding approach and pick positions. These positions can be requested and read using the TPR commands as follows:
- Send: "TPR,nnn,a,b,cDelimiter" → nnn: tool no., a: result (0: Approach Position, 1: Grip Position, 2: Place Position), b: pick no., c: data output format (always 0 for HORST), Delimiter: separator (semicolon ;)
- Receive: "TPR,m,n,h,xxx,yyy,zzz,ppp,qqq,rrrDelimiter" → m: detected model no., n: grip label no., h: hand model no., xxx,yyy,zzz,ppp,qqq,rrr: object coordinates (Cartesian coordinates + Euler angles), Delimiter: separator (Carriage Return \r)
A more detailed documentation of the TPR commands can be found in the appendix (TPR-Commands.pdf).
//Request the approach position (a = 0)
schreibeNachricht("TPR,101,0,0;");
//Read the approach position
var cam_resultApp = leseNachricht(socket, delimiter);
//Split the string into its parts
var splittedApp = cam_resultApp.split(",");
var xApp = splittedApp[4] / 1000; //convert mm to m
var yApp = splittedApp[5] / 1000;
var zApp = splittedApp[6] / 1000;
var RxApp = parseFloat(splittedApp[7]);
var RyApp = parseFloat(splittedApp[8]);
var RzApp = parseFloat(splittedApp[9]); //parseFloat ignores the trailing delimiter (\r)
//Request the grip position (a = 1)
schreibeNachricht("TPR,101,1,0;");
//Read the grip position
var cam_resultGrip = leseNachricht(socket, delimiter);
//Split the string into its parts
var splittedGrip = cam_resultGrip.split(",");
var xGrip = splittedGrip[4] / 1000; //convert mm to m
var yGrip = splittedGrip[5] / 1000;
var zGrip = splittedGrip[6] / 1000;
var RxGrip = parseFloat(splittedGrip[7]);
var RyGrip = parseFloat(splittedGrip[8]);
var RzGrip = parseFloat(splittedGrip[9]); //parseFloat ignores the trailing delimiter (\r)
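To avoid duplicating this request-and-parse sequence, both requests can be wrapped in a small helper function. The following is a minimal sketch under the assumptions used above (tool no. 101, coordinates in millimetres); the function name holePosition is hypothetical:
//Sketch of a hypothetical helper: request a TPR result (0: approach, 1: grip, 2: place)
//and return the pose as an object
function holePosition(result) {
    schreibeNachricht("TPR,101," + result + ",0;");
    var parts = leseNachricht(socket, delimiter).split(",");
    return {
        x: parts[4] / 1000, //convert mm to m
        y: parts[5] / 1000,
        z: parts[6] / 1000,
        rx: parseFloat(parts[7]),
        ry: parseFloat(parts[8]),
        rz: parseFloat(parts[9]) //parseFloat ignores the trailing delimiter
    };
}

//Example usage
var approach = holePosition(0);
var grip = holePosition(1);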
The provided coordinates can then be approached, for example, using an advanced move command (see Textual Programming).
// Approach position
move({
'Coord': 'CARTESIAN_BASIS',
'MoveType': 'JOINT',
'PoseRelation': 'ABSOLUTE',
'anyconfiguration': false,
'blendradius.orient': 180.0,
'blendradius.xyz': 0.03,
'speed.ratio': 1.0,
'targetpose.x': xApp,
'targetpose.y': yApp,
'targetpose.z': zApp,
'targetpose.rx': RxApp,
'targetpose.ry': RyApp,
'targetpose.rz': RzApp,
}, "Approachposition");
// Grip position
move({
'Coord': 'CARTESIAN_BASIS',
'MoveType': 'LINEAR',
'PoseRelation': 'ABSOLUTE',
'anyconfiguration': false,
'speed.ratio': 0.7,
'targetpose.x': xGrip,
'targetpose.y': yGrip,
'targetpose.z': zGrip,
'targetpose.rx': RxGrip,
'targetpose.ry': RyGrip,
'targetpose.rz': RzGrip,
}, "Gripposition");
The complete sample program can also be found in the appendix (Keyence_Horst_Template.js).
4. Appendix
- Start-up Explanation.pdf
- TPR-Commands.pdf
- horstFX sample program Keyence (Keyence_Horst_Template.js)