
Mitsubishi Industrial Robot

CRn-500 Series

Network Vision Sensor Instruction Manual

3D-51C-WINE

4D-2CG5100-PKG-E

4D-2CG5400-PKG-E

4D-2CG5401-PKG-E

4D-2CG5403-PKG-E

4D-2CG5400C-PKG-E

4D-2CG5400R-PKG-E

BFP-A8520-A

Revision History

Printing Date / Instruction Manual No.
2009-09-18 / BFP-A8520
2009-10-08 / BFP-A8520-A

Revision Contents
Corresponds to MELFA-Vision Ver. 1.1.1 and 1.2. The lineup was added (high performance / high speed / Micro).

Preface

Thank you for purchasing this network vision sensor for CRn-500 series Mitsubishi Electric industrial robots.

The network vision sensor is an option that is used in combination with a CRn-500 series controller to make it possible to detect and inspect work through visual recognition. Before using this sensor, please read this manual carefully so that you can make full use of its contents.

This manual also attempts to cover special handling. Please interpret the absence of an operation from this manual as meaning that it cannot be done.

The contents of this manual target the following software versions.

Robot controller : Ver. K6 or later

Symbols & Notation Method in This Manual

DANGER: This indicates a situation in which a mistake in handling will expose the user to the danger of death or severe injury.

WARNING: This indicates a situation in which a mistake in handling has the possibility of resulting in death or severe injury of the user.

CAUTION: This indicates a situation in which a mistake in handling has the danger of causing injury to the user. Equipment damage is also possible.

No part of this manual may be reproduced by any means or in any form, without prior consent from Mitsubishi.

The details of this manual are subject to change without notice.

An effort has been made to make full descriptions in this manual. However, if any discrepancies or unclear points are found, please contact your dealer.

The information contained in this document has been written to be as accurate as possible. Please interpret items not described in this document as "cannot be performed".

Please contact your nearest dealer if you find any doubtful, wrong, or missing points.

Microsoft, Windows, and .NET Framework are registered trademarks of the Microsoft Corporation of the United States in the United States and/or other countries.

In-Sight is a registered trademark of Cognex Corporation.

Adobe, the Adobe logo, Acrobat, and the Acrobat logo are trademarks of Adobe Systems Incorporated.

References to registered trademarks and trademarks are omitted in this manual.

Copyright(C) 2006-2009 MITSUBISHI ELECTRIC CORPORATION ALL RIGHTS RESERVED

Safety Precautions

Always read the following precautions and the separate "Safety Manual" before starting use of the robot to learn the required measures to be taken.

CAUTION: All teaching work must be carried out by an operator who has received special training. (This also applies to maintenance work with the power source turned ON.) (Enforcement of safety training)

CAUTION: For teaching work, prepare a work plan related to the methods and procedures of operating the robot, and to the measures to be taken when an error occurs or when restarting. Carry out work following this plan. (This also applies to maintenance work with the power source turned ON.) (Preparation of work plan)

WARNING: Prepare a device that allows operation to be stopped immediately during teaching work. (This also applies to maintenance work with the power source turned ON.) (Setting of emergency stop switch)

CAUTION: During teaching work, place a sign indicating that teaching work is in progress on the start switch, etc. (This also applies to maintenance work with the power source turned ON.) (Indication of teaching work in progress)

WARNING: Provide a fence or enclosure during operation to prevent contact between the operator and the robot. (Installation of safety fence)

CAUTION: Establish a set signaling method to the related operators for starting work, and follow this method. (Signaling of operation start)

CAUTION: As a principle, turn the power OFF during maintenance work. Place a sign indicating that maintenance work is in progress on the start switch, etc. (Indication of maintenance work in progress)

CAUTION: Before starting work, inspect the robot, emergency stop switch and other related devices, etc., and confirm that there are no errors. (Inspection before starting work)

The points of the precautions given in the separate "Safety Manual" are given below.

Refer to the actual "Safety Manual" for details.

WARNING: When automatically operating the robot with multiple control devices (GOT, PLC, pushbutton switch), the user must design the interlocks, such as each device's operation rights.

CAUTION: Use the robot within the environment given in the specifications. Failure to do so could lead to a drop in reliability or to faults. (Temperature, humidity, atmosphere, noise environment, etc.)

CAUTION: Transport the robot in the designated transportation posture. Transporting the robot in a non-designated posture could lead to personal injuries or faults from dropping.

CAUTION: Always install the robot on a secure table. Use in an unstable posture could lead to positional deviation and vibration.

CAUTION: Wire the cable as far away from noise sources as possible. If placed near a noise source, positional deviation or malfunction could occur.

CAUTION: Do not apply excessive force on the connector or excessively bend the cable. Failure to observe this could lead to contact defects or wire breakage.

CAUTION: Make sure that the workpiece weight, including the hand, does not exceed the rated load or tolerable torque. Exceeding these values could lead to alarms or faults.

WARNING: Securely install the hand and tool, and securely grasp the workpiece. Failure to observe this could lead to personal injuries or damage if the object comes off or flies off during operation.

WARNING: Securely ground the robot and controller. Failure to observe this could lead to malfunction caused by noise or to electric shock accidents.

CAUTION: Indicate the operation state during robot operation. Failure to indicate the state could lead to operators approaching the robot or to incorrect operation.

WARNING: When carrying out teaching work in the robot's movement range, always secure the priority right for the robot control. Failure to observe this could lead to personal injuries or damage if the robot is started with external commands.

CAUTION: Keep the jog speed as low as possible, and always watch the robot. Failure to do so could lead to interference with the workpiece or peripheral devices.

CAUTION: After editing the program, always confirm the operation with step operation before starting automatic operation. Failure to do so could lead to interference with peripheral devices because of programming mistakes, etc.

CAUTION: Make sure that if the safety fence entrance door is opened during automatic operation, the door is locked or the robot will automatically stop. Failure to do so could lead to personal injuries.

CAUTION: Never carry out modifications based on personal judgments, or use non-designated maintenance parts. Failure to observe this could lead to faults or failures.

WARNING: When the robot arm has to be moved by hand from an external area, do not place hands or fingers in the openings. Failure to observe this could lead to hands or fingers being caught, depending on the posture.

CAUTION: Do not stop the robot or apply an emergency stop by turning the robot controller's main power OFF. If the robot controller's main power is turned OFF during automatic operation, the robot accuracy could be adversely affected. Moreover, the arm could interfere with peripheral devices by dropping or moving under inertia.

CAUTION: Do not turn off the main power to the robot controller while rewriting the internal information of the robot controller, such as the program or parameters. If the main power to the robot controller is turned off during automatic operation or while rewriting the program or parameters, the internal information of the robot controller may be damaged.

Precautions for the basic configuration are shown below. (These apply when the CR1-571/CR1B-571 is used as the controller.)

CAUTION: Provide the earth leakage breaker packed together with the controller on the primary power supply of the controller as protection against electric leakage. If the controller is a type that accepts more than one power supply voltage, confirm the setting connector of the input power supply voltage before connecting the power supply. Failure to do so could lead to electric shock accidents.

Power supply:
* RV-1A/2AJ series and RP-1AH/3AH/5AH series: single phase 90-132 VAC or 180-253 VAC
* All other models: single phase 180-253 VAC

[Figure: Rear side of the controller, showing the earth leakage breaker (NV), terminal cover, and protective earth terminal (PE).]

When using the RH-5AH/10AH/15AH series or RH-6SH/12SH/18SH series:

WARNING: While pressing the brake release switch on the robot arm, beware of the arm, which may drop under its own weight. Dropping of the hand could lead to a collision with peripheral equipment or could catch hands or fingers.

Contents

1. SUMMARY ... 1-1
1.1. What A Network Vision Sensor Is ... 1-1
1.2. Features ... 1-2
1.3. Applications ... 1-3
1.4. Explanation of terms ... 1-5
2. SYSTEM CONFIGURATION ... 2-6
2.1. Component Devices ... 2-6
2.1.1. Network vision sensor basic set composition and accessories ... 2-6
2.1.2. Equipment provided by customer ... 2-10
2.2. System configuration example ... 2-11
2.2.1. Configuration with one robot controller (for CR1 controller) and one vision sensor ... 2-11
2.2.2. Configuration with one robot controller and two vision sensors ... 2-12
2.2.3. Configuration with three robot controllers and one vision sensor ... 2-13
3. SPECIFICATIONS ... 3-14
3.1. Network vision sensor specifications ... 3-14
3.1.1. External Dimensions of Network Vision Sensor (5100/5400/5401/5403/5400C) ... 3-15
3.1.2. External Dimensions of Network Vision Sensor 5400R ... 3-16
3.2. Robot controller specifications ... 3-18
3.3. MELFA-Vision ... 3-19
3.3.1. Features ... 3-19
3.3.2. Operating Environment ... 3-20
4. WORK CHARTS ... 4-21
4.1. Work procedure chart ... 4-21
5. EQUIPMENT PREPARATION AND CONNECTION ... 5-22
5.1. Equipment preparation ... 5-22
5.2. Equipment connection ... 5-23
5.2.1. Expansion option box installation (for CR1 controller) ... 5-23
5.2.2. Mounting the Ethernet interface card (card name: HR533) ... 5-23
5.2.3. Individual equipment connections ... 5-23
5.3. Software installation ... 5-25
5.3.1. Vision sensor dedicated software (In-Sight Explorer before Ver.4.1) installation ... 5-25
5.3.2. Vision sensor dedicated software (In-Sight Explorer Ver.4.1 or later) installation ... 5-26
5.3.3. MELFA-Vision installation ... 5-27
6. VISION SENSOR SETTINGS ... 6-29
6.1. Vision Sensor Initial Settings ... 6-29
6.2. Vision Sensor Initial Settings (In-Sight Explorer Ver.4.1 or later) ... 6-32
6.3. Work recognition test ... 6-34
6.3.1. Starting MELFA-Vision (network vision sensor support software) ... 6-34
6.3.2. Image adjustment ... 6-39
6.3.3. Image processing settings ... 6-42
7. ROBOT CONTROLLER SETTINGS ... 7-57
7.1. Robot Controller Parameter Settings ... 7-57
7.2. Calibration Setting ... 7-62
7.3. Robot Program Writing ... 7-73
7.3.1. Flow for starting of image processing by robot ... 7-73
7.3.2. Writing a Sample Robot Program ... 7-73
7.4. Executing the automatic operation test ... 7-75
7.4.1. Put the vision sensor online ... 7-75
7.4.2. Test by executing each step ... 7-75
7.4.3. Starting a Robot Program ... 7-76
7.5. When the robot cannot grasp the work normally ... 7-78
7.5.1. Check the MELFA-Vision [Camera Image] ... 7-78
7.5.2. Comparison of the position data for the work recognized by the vision sensor and the position data received by the robot ... 7-78
8. MAINTENANCE ... 8-80
8.1. Vision Sensor Data Backup ... 8-80
8.2. Vision Sensor Data Restoration ... 8-82
8.3. Vision Sensor Cloning ... 8-85
8.4. Image Log Acquisition Settings and Reception Start/End ... 8-87
8.5. Vision Startup Settings ... 8-88
8.6. User List Settings ... 8-89
9. DETAILED EXPLANATION OF FUNCTIONS ... 9-91
9.1. Vision Sensor Dedicated Commands and Status Variables ... 9-91
9.1.1. How to Read Items ... 9-91
9.1.2. MELFA-BASIC IV Commands ... 9-91
9.1.3. Robot status variables ... 9-107
9.2. MELFA-Vision Function Details ... 9-115
9.2.1. MELFA-Vision Main Screen ... 9-115
9.2.2. Job Editing screen ([Image Log] tab) ... 9-116
9.2.3. Job edit screen ([Result Cell Position] tab) ... 9-117
9.2.4. Vision sensor network settings ... 9-118
9.3. Vision program detailed explanation ... 9-119
9.3.1. Templates provided for MELFA-Vision ... 9-119
9.3.2. Image processing - blobs ... 9-121
9.3.3. Image processing - Color ... 9-124
9.3.4. Using image processing for which there is no template ... 9-132
9.3.5. To shorten the time for transferring data with the robot controller ... 9-133
9.4. Detailed explanation of systems combining multiple vision sensors and robots ... 9-136
9.4.1. Systems with one robot controller and multiple vision sensors ... 9-136
9.4.2. Systems with one vision sensor and multiple robot controllers ... 9-137
10. TROUBLESHOOTING ... 10-139
10.1. Error list ... 10-139
11. APPENDIX ... 11-142
11.1. Performance of this product (comparison with built-in type RZ511 vision sensor) ... 11-142
11.1.1. Comparison of work recognition rate ... 11-142
11.1.2. Comparison of image processing capacity ... 11-142
11.1.3. Factors affecting the processing time ... 11-143
11.2. Calibration No. marking sheet ... 11-144


1. Summary

1.1. What A Network Vision Sensor Is

The network vision sensor is an option that makes it possible to discriminate the position of various types of work, and to transport, process, assemble, inspect, and measure work with MELFA robots. It consists of the vision sensor, MELFA-Vision, and the related options.


1.2. Features

The network vision sensor has the following functions.

(1) Position detection through high-speed image processing

High-speed image processing makes it possible to detect the work at high speed, not only when the angle is not detected, but even when the work may be rotated through a full 360°.
・ When the angle is not detected: approx. 50 ms
・ When detecting 360° rotation: approx. 150 ms
* Measurement conditions: search area 640x480, pattern 90x90. These are the pattern matching processing times using the In-Sight 5400 (camera exposure time of 8 ms).
* These values are reference values and are not guaranteed.

(2) Ethernet communication

・ Since the system can be configured over an Ethernet network, a wide variety of system configurations can be realized.
① Up to seven vision sensors can be controlled with one robot controller.
② Up to three robot controllers can share control of one vision sensor.
③ Systems can be configured with multiple robot controllers and multiple vision sensors.
④ Both the robot controller and the vision sensor can be debugged using one PC.
・ "MELFA-Vision Network Vision Sensor Support Software" has image log functions, so it is possible to check the image state when an error occurred.

(3) Easy setting

・ A vision sensor is connected simply by connecting the Ethernet cable and the power supply cable. At the robot controller, just connect the Ethernet cable to the Ethernet interface card.

・ The vision sensor and robot controller settings can be made simply with MELFA-Vision.

(4) Easy robot program calibration

Programs can be written easily with the MELFA-BASIC IV commands provided exclusively for the vision sensor (see the sketch after this list).

This system is equipped with a simple calibration function that can handle a variety of camera installation positions.

(5) Space saving, wiring saving

Since the vision sensor combines the camera and controller in one piece, the only wiring needed is the Ethernet cable and power supply cable, so wiring does not take up space.

(6) Easy maintenance

It is possible to store recognized images on a PC with MELFA-Vision running, to check the image when an error occurred, and to find the cause of the error easily.
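As a rough illustration of feature (4), the following is a minimal sketch of a robot program that uses the dedicated vision commands described in Chapter 9 (NVOpen, NVRun, EBRead, NVClose). The communication line "COM3:", the job name "TEST", and the received data format (a count followed by one position) are assumptions for illustration only; see sections 7.3 and 9.1.2 for the actual command syntax and arguments.

10 NVOPEN "COM3:" AS #1          ' Log on to the vision sensor assigned to COM3: (assumed line)
20 NVRUN #1,"TEST"               ' Start the vision program (job) "TEST" (assumed job name)
30 EBREAD #1,,MNUM,PVS1          ' Receive the results (assumed format: found count, then one position)
40 IF MNUM>=1 THEN MOV PVS1,-50  ' If at least one work was found, approach 50 mm above it
50 NVCLOSE #1                    ' Log off from the vision sensor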


1.3. Applications

Here are major applications of the network vision sensor.

(1) Loading/Unloading Machined Parts

Figure 1-1 Example of Loading/Unloading Machined Parts

(2) Processed Food Pallet Transfer

Figure 1-2 Example of Processed Food Pallet Transfer

(3) Lining Up and Palletizing Electronic Parts

Figure 1-3 Example of Lining Up and Palletizing Electronic Parts


(4) Small Electrical Product Assembly

Figure 1-4 Example of Small Electrical Product Assembly

(5) Lining Up Parts

Figure 1-5 Example of Lining Up Parts

(6) Small Electronic Parts Mounting

Figure 1-6 Example of Small Electronic Parts Mounting


1.4. Explanation of terms

This section explains the terms used in this manual.

CCD (Charge Coupled Device) ····· This is the most common image pickup element used in cameras.

Degree of matching (score) ······· This value indicates the degree to which the image matches the registered pattern. This value ranges from 0 to 100. The closer to 100, the higher the degree of matching.

Offline········································· This is a vision sensor mode for such work as setting the vision sensor operating environment, setting the image processing, and backing up data to a PC.

Online ········································ This is the vision sensor mode in which the vision sensor executes image processing under command from the robot controller.

Picture element (pixel)··············· This is the smallest unit of data making up the image. One image comprises 640x480 pixels. Depending on the type of vision sensor, one image comprises 1024x768 or 1600x1200 pixels.

Contrast ····································· This is a yardstick expressing the "brightness" of a pixel in units from 0-255. The smaller this value, the darker the pixel; the higher this value, the brighter the pixel.

Calibration·································· This is coordinate conversion for converting from the image processing coordinate system to the robot coordinate system (see the note after this list of terms).

Threshold··································· This is the cutoff point for degree of matching scores.

Shutter speed ···························· This is the exposure time (the time during which the CCD accumulates charge).

Sort ············································ This rearranges the order in which data (recognition results) is output to the robot according to the specified item.

Trigger········································ This is the starting signal for starting the exposure (image capture).

Pattern matching························ This is processing for detecting the pattern that matches the pattern registered from the captured image.

Vision program (job) ·················· This is the program that executes such image processing as pattern matching, blobbing, etc. The image processing can be set freely.

Filter··········································· This is a form of image processing for improving the picture quality.

Blob············································ This is a type of image processing for detecting blobs with features in the image captured. Bright sections are expressed as white; dark sections are expressed as black.

Host name ································· This is the network vision sensor name. This is registered in the initial settings.

Live ············································ Images can be displayed in real time by shooting continuously.

Area ··········································· This is the processing area for executing image processing.

Log function ······························· This function stores the image taken in with online operation and the execution results (log).

Exposure···································· This is the accumulation of charge on the CCD. When light strikes the CCD, charge accumulates and the degree of this accumulation becomes the degree of brightness of the image.
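As a simplified note on the calibration term above: when a fixed camera views work on a flat plane, the conversion from a pixel position $(u, v)$ to a robot position $(X, Y)$ can be thought of as a 2D affine transform whose coefficients are what the calibration work in section 7.2 determines. This is a conceptual sketch only, not the exact internal formula used by MELFA-Vision.

$$\begin{pmatrix} X \\ Y \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix}\begin{pmatrix} u \\ v \end{pmatrix} + \begin{pmatrix} e \\ f \end{pmatrix}$$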


2. System configuration

2.1. Component Devices

2.1.1. Network vision sensor basic set composition and accessories

The composition of the network vision sensor basic set that you have purchased is shown in "Table 2-1 List of Network Vision Sensor Basic Set Composition".

Table 2-1 List of Network Vision Sensor Basic Set Composition

Network vision sensor set types:
・ Vision sensor 5100: 4D-2CG5100-PKG
・ Vision sensor 5400: 4D-2CG5400-PKG
・ Vision sensor 5401: 4D-2CG5401-PKG
・ Vision sensor 5403: 4D-2CG5403-PKG
・ Vision sensor 5400C: 4D-2CG5400C-PKG
・ Vision sensor 5400R: 4D-2CG5400R-PKG
・ MELFA-Vision only: 3D-51C-WINE (the vision sensor and related items are prepared by the customer (*5)(*6))

Composition articles:
・ Lens cover
・ Thread guard
・ Breakout cable (5 m) (*1)
・ Network cable (5 m) (*1)
・ Camera cable (5 m) and remote head camera installation bracket (*4)
・ In-Sight 5000 series installation guide
・ In-Sight CD-ROM: Part# 206-6364-*** (*2)
  - In-Sight Explorer
  - In-Sight Display Control
  - In-Sight OPC Server Software
  - Documents (Help / Installation Manuals)
・ MELFA-Vision CD-ROM: 3D-51C-WINE (*3)
  - MELFA-Vision (network vision sensor support software)
  - Instruction manual BFP-A8779

For the 3D-51C-WINE set, the articles other than the MELFA-Vision CD-ROM are prepared by the customer (*5).

(*1) The cable length can be changed. For details, see "Table 2-2 List of Network Vision Sensor Related Options".
(*2) This is a CD-ROM that comes with a vision sensor made by Cognex Corporation. This CD-ROM contains the software and operation manuals required for using the network vision sensor. The *** in the part number is the version number.
(*3) MELFA-Vision and the instruction manual are included.
(*4) The camera cable that connects the remote head camera to the vision sensor, and the remote head camera installation bracket, are bundled with the network vision sensor 5400R.
(*5) These specifications apply when the vision sensor and related options are prepared by the user and only MELFA-Vision (network vision sensor support software and instruction manual) is provided. The applicable vision sensors (COGNEX brand) are listed in Table 3-4 THE CORRESPONDENCE TYPE AND VERSION OF MELFA-VISION for reference.
(*6) The vision sensor must be equipped with the PatMax image processing algorithm.


Table 2-2 List of Network Vision Sensor Related Options
(Option name: Model)
・ Ethernet interface card: 2A-HR533
・ Expansion option box: CR1-EB3
・ Network cable: 0.6 m CCB-84901-1001-00 / 2 m CCB-84901-1002-02 / 5 m CCB-84901-1003-05 / 10 m CCB-84901-1004-10 / 15 m CCB-84901-1005-15 / 30 m CCB-84901-1006-30
・ Breakout cable: 2 m CCB-84901-0101-02 / 5 m CCB-84901-0102-05 / 10 m CCB-84901-0103-10 / 15 m CCB-84901-0104-15
・ Camera cable: 5 m CCB-84901-0303-05 / 10 m CCB-84901-0304-10 / 15 m CCB-84901-0305-15
・ I/O Module:
  - Terminal block conversion module: CIO-1350
  - I/O expansion module (8 inputs/8 outputs): CIO-1450
・ Diffused ring light (red): IFS-DRL-050
・ Direct ring light (red): IFS-RRL050
・ Direct ring light (white): IFS-WRL050
・ Network vision sensor instruction manual (booklet): BFP-A8779


The composition of the basic set (all-in-one design) is shown in the figure below.

[Figure 2-1 Basic set (all-in-one design) composition: lens cover, O-ring, thread guard, In-Sight vision sensor, breakout cable, network cable, In-Sight Software CD-ROM, MELFA-Vision CD-ROM, installation guide]


The composition of the 4D-2CG5400R-PKG (remote head type) set is shown in the figure below.

[Figure 2-2 Basic set (remote head type) composition: remote head camera, camera cable, network vision sensor, breakout cable, network cable, In-Sight Software CD-ROM, MELFA-Vision CD-ROM, installation guide]


2.1.2. Equipment provided by customer

In addition to this product, the system also includes equipment provided by the customer. "Table 2-3 List of Equipment Provided by Customer" shows the minimum necessary equipment. The equipment for the customer to provide depends on the system. For details, see "2.2 System configuration example".

Table 2-3 List of Equipment Provided by Customer
(Device name: Recommended product)
・ Vision sensor: In-Sight 5000 series (refer to Table 2-1) (*1)
・ Breakout cable: (refer to Table 2-1, Table 2-2) (*1)
・ Network cable: (refer to Table 2-1, Table 2-2) (*1)
・ Camera lens: C mount lens (a CS mount lens can also be used with the 5400R)
・ 24V power supply: 24 VDC (±10%) (5100/5400/5400C/5401: 350 mA or larger; 5403: 500 mA or larger; 5400R: 250 mA or larger)
・ PC:
  - CPU: Intel® Pentium® III 700 MHz (or equivalent) or faster
  - Memory size: 256 MB min.
  - Hard disk: available capacity of 200 MB min.
  - OS: Microsoft® Windows® 2000 Service Pack 4, or Microsoft® Windows® XP Professional Service Pack 2
  - Display: an SVGA (800x600) or higher resolution display with graphics functions that can display at least 16 colors
  - Disc device: CD-ROM drive
  - Keyboard: PC/AT compatible keyboard
  - Pointing device: device that operates in the Windows® operating system
  - Communications: must have an Ethernet port that operates in the Windows® operating system
・ Hub: a switching hub is recommended.
・ Ethernet straight cable: any straight Ethernet cable is OK.
・ Lighting device: select the optimum lighting for the work to be recognized. LED lights are recommended for their long service life.
(*1) It is attached to the network vision sensor set.


2.2. System configuration example

2.2.1. Configuration with one robot controller (for CR1 controller) and one vision sensor

The entire configuration (robot system) when one camera is used is shown below.

[Figure 2-3 Configuration (Robot System) When One Camera Is Used: In-Sight 5100/5400/5401/5403/5400C/5400R, 24 V power supply, hub, PC tool, Ethernet card (10BASE-T), robot, robot controller, expansion option box]

Below is a list of the equipment configuration when one camera is used.

Table 2-4 List of Configuration When One Camera Is Used
(Part name / Format / Manufacturer / Q'ty / Remarks)
・ Robot controller / CRn-500 series / Mitsubishi Electric / 1 / Software: K6 or later
・ Robot main unit / All models / Mitsubishi Electric / 1
・ Expansion option box / CR1-EB3 / Mitsubishi Electric / (1) / For CR1 controller
・ Ethernet interface card / 2A-HR533 / Mitsubishi Electric / 1
・ MELFA-Vision / 3D-51C-WINE / Mitsubishi Electric / 1 / (*4)
・ Vision sensor / In-Sight 5000 series / COGNEX / 1 / Software: 3.20 or later (*4)
・ Breakout cable / - / COGNEX / 1 / (*4)
・ Network cable / - / COGNEX / 1 / (*4)
・ Lens / C mount lens (*1) / - / 1 / Provided by the customer (*2)
・ 24V power supply / (*3) / - / 1 / Provided by the customer (*2)
・ PC / - / - / 1 / Provided by the customer (*2)
・ Hub / - / - / 1 / Provided by the customer (*2)
・ Ethernet cable (straight) / - / - / 2 / Provided by the customer (*2)
・ Lighting device / - / - / 1 / Provided by the customer (*2)

(*1) Select from general C mount lenses.
(*2) Equipment provided by the customer.
(*3) For the 24 VDC (±10%) power supply, the vision sensor requires a minimum of 350 mA (5403: a minimum of 500 mA / 5400R: a minimum of 250 mA).
(*4) It is attached to the network vision sensor set.


2.2.2. Configuration with one robot controller and two vision sensors

The entire configuration (robot system) when two cameras are used is shown below.

[Figure 2-4 Configuration (Robot System) When Two Cameras Are Used: In-Sight 5100/5400/5401/5403/5400C/5400R, 24 V power supply, hub, PC tool, Ethernet card (10BASE-T), robot, robot controller, expansion option box]

Below is a list of the equipment configuration when two cameras are used.

Table 2-5 List of Configuration When Two Cameras Are Used
(Part name / Format / Manufacturer / Q'ty / Remarks)
・ Robot controller / CRn-500 series / Mitsubishi Electric / 1 / Software: K6 or later
・ Robot main unit / All models / Mitsubishi Electric / 1
・ Expansion option box / CR1-EB3 / Mitsubishi Electric / (1) / For CR1 controller
・ Ethernet interface card / 2A-HR533 / Mitsubishi Electric / 1
・ MELFA-Vision / 3D-51C-WINE / Mitsubishi Electric / 1 / (*5)
・ Vision sensor (*4) / In-Sight 5000 series / COGNEX / 2 / Software: 3.20 or later (*5)
・ Breakout cable (*4) / - / COGNEX / 2 / (*5)
・ Network cable (*4) / - / COGNEX / 2 / (*5)
・ Lens (*4) / C mount lens (*1) / - / 2 / Provided by the customer (*2)
・ 24V power supply / (*3) / - / 1 / Provided by the customer (*2)
・ PC / - / - / 1 / Provided by the customer (*2)
・ Hub / - / - / 1 / Provided by the customer (*2)
・ Ethernet cable (straight) / - / - / 2 / Provided by the customer (*2)
・ Lighting device / - / - / 1 / Provided by the customer (*2)

(*1) Select from general C mount lenses.
(*2) Equipment provided by the customer.
(*3) For the 24 VDC (±10%) power supply, the vision sensor requires a minimum of 350 mA (5403: a minimum of 500 mA / 5400R: a minimum of 250 mA).
(*4) Up to seven vision sensors can be connected at the same time to one robot controller, so prepare the necessary quantity for the number of vision sensors you use.
(*5) It is attached to the network vision sensor set.


2.2.3. Configuration with three robot controllers and one vision sensor

The entire configuration (robot system) when one camera is used with three robots is shown below.

[Figure 2-5 Configuration (Robot System) When One Camera Is Used with Three Robots: In-Sight 5100/5400/5401/5403/5400C/5400R, 24 V power supply, hub, PC tool, robot, robot controller, expansion option box]

Below is a list of the equipment configuration when one camera is used with three robots.

Table 2-6 List of Configuration When One Camera Is Used with Three Robots
(Part name / Format / Manufacturer / Q'ty / Remarks)
・ Robot controller (*4) / CRn-500 series / Mitsubishi Electric / 3 / Software: K6 or later
・ Robot main unit / All models / Mitsubishi Electric / 3
・ Expansion option box / CR1-EB3 / Mitsubishi Electric / (3) / For CR1 controller
・ Ethernet interface card / 2A-HR533 / Mitsubishi Electric / 3
・ MELFA-Vision / 3D-51C-WINE / Mitsubishi Electric / 1 / (*5)
・ Vision sensor / In-Sight 5000 series / COGNEX / 1 / Software: 3.20 or later (*5)
・ Breakout cable / - / COGNEX / 1 / (*5)
・ Network cable / - / COGNEX / 1 / (*5)
・ Lens / C mount lens (*1) / - / 1 / Provided by the customer (*2)
・ 24V power supply / (*3) / - / 1 / Provided by the customer (*2)
・ PC / - / - / 1 / Provided by the customer (*2)
・ Hub / - / - / 1 / Provided by the customer (*2)
・ Ethernet cable (straight) / - / - / 4 / Provided by the customer (*2)
・ Lighting device / - / - / 1 / Provided by the customer (*2)

(*1) Select from general C mount lenses.
(*2) Equipment provided by the customer.
(*3) For the 24 VDC (±10%) power supply, the vision sensor requires a minimum of 350 mA (5403: a minimum of 500 mA / 5400R: a minimum of 250 mA).
(*4) Up to three robot controllers can be connected at the same time to one vision sensor, so prepare the necessary quantity for the number of robot controllers you use.
(*5) It is attached to the network vision sensor set.


3. Specifications

3.1. Network vision sensor specifications

Here are the specifications of the network vision sensor by itself.

Table 3-1 Network Vision Sensor Stand-Alone Specifications

Models: Standard 5100 / High-performance 5400 / Color 5400C (*1) / High-resolution 5401, 5403 (*1) / Remote head 5400R (*1)

・ Average performance ratio (with the standard edition as 1) (*2): x1 / x2 / x2.5, depending on the model
・ Memory: vision program storage area 32 MB; image processing area 64 MB
・ Firmware version: Ver. 3.2 or later
・ Camera: CCD sensor size 1/3 inch or 1/1.8 inch depending on the model (5400R: 1/3 inch); color imaging on the 5400C only
・ Image capture speed [frames/sec.] (*3): 256
・ Display option: VGA board / PC / controller pad/VGA
・ I/O option (*6): trigger / high-speed output count: 2 (*5); I/O breakout expansion module; Ethernet I/O support (512 inputs max. / 512 outputs)
・ Interface (*6): Ethernet
・ Weight: 297.6 g / 294.8 g (lens cover mounted, no lens)
・ Lighting: integrated lighting option
・ Application development: In-Sight Explorer / PC (communication lines: 3)
・ Lens mounting: C mount (C or CS mount for the 5400R)
・ Power supply: 24 VDC ±10%; maximum current 350 mA (5100/5400/5400C/5401), 500 mA (5403), 250 mA (5400R)
・ Image processing: pattern matching / blob / edge / bar code / 2D codes / text comparison / histogram / color
・ Ambient temperature (operation / storage): 0 to 45°C / -30 to 80°C (*7)
・ Ambient humidity: 90% max. (no condensation allowed)
・ Protection: IP67 (when the lens cover is installed) (*4)
・ Impact: 80 G (IEC 68-2-27)
・ Vibration: 10 G (10 to 500 Hz, IEC 68-2-6)
・ Certification: CE, FCC, UL, cUL

(*1) The high-resolution, color, and remote head editions are supported from MELFA-Vision Ver. 1.1.
(*2) The performance values do not include the image capture speed.
(*3) The image capture speeds are the values with an exposure time of 8 ms and full image frame capture.
(*4) A lens cover (supplied with this sensor) designed to meet the NEMA standard protection specifications is required.
(*5) One high-speed output is for the strobe.
(*6) The minimum bend radius of the I/O and Ethernet cables is 38 mm.
(*7) For the 5400R, the maximum operating temperature of the remote head is 50°C.


3.1.1. External Dimensions of Network Vision Sensor (5100/5400/5401/5403/5400C)

The external dimensions of the network vision sensor (5100/5400/5401/5403/5400C) are shown below. Refer to them when mounting the vision sensor.

Figure 3-1 External Charts of Network Vision Sensor(5100/5400/5401/5403/5400C)


3.1.2. External Dimensions of Network Vision Sensor 5400R

The external dimensions of the network vision sensor 5400R are shown below. Refer to them when mounting the vision sensor.

Unit:mm

Figure 3-2 External Charts of Network Vision Sensor 5400R (Processor part)


Unit:mm

Figure 3-3 External Charts of Network Vision Sensor 5400R (Remote Head part)

Unit:mm

Figure 3-4 External Charts of Network Vision Sensor 5400R (Bracket part)


3.2. Robot controller specifications

The robot controller specifications related to the network vision sensor are shown below.

Table 3-2 Robot Controller Specifications (Item: Specifications)
・ Applicable robot controllers: All CRn-500 series controllers (*3)
・ Robot controller software version: Ver. K6 or later (*1)
・ RT Toolbox: Ver. F3 or later is recommended (*2)
・ Connectable robots: All robots (*4)
・ Options (*5): Ethernet interface card (2A-HR533) required
・ Number of sensors and robots connectable:
  - Number of cameras per robot controller: 7 maximum
  - Number of robot controllers that can be connected per vision sensor: 3 maximum
・ Robot program language: MELFA-BASIC IV with special vision sensor commands

(*1) Versions K5 and earlier can communicate with a vision sensor by combining the conventional Open/Print/Input/Close commands. Versions K7 and later can communicate with a vision sensor at higher speed by means of improved commands. (See the sketch after these notes.)
(*2) Versions F2 and earlier do not support the MELFA-BASIC IV special commands for vision sensors, so errors occur in syntax checks; use these versions with syntax checking disabled.
(*3) Ethernet functions must be mounted.
(*4) Note that the robot types are restricted if tracking functions are used.
(*5) For a CR1/CR1B controller, the expansion option box is required. For details on the expansion option box specifications and installation method, see the BFP-A8054 "CR1/CR1B Controller Instruction Manual", "From Controller Setup and Basic Operations to Maintenance", "3. Optional Equipment Installation". Mount the Ethernet interface card (2A-HR533) in option slot 1.
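For reference, the following is a minimal sketch of the legacy communication pattern mentioned in note (*1) (controller software versions K5 and earlier), which combines the general-purpose Open/Print/Input/Close commands. The line name "COM3:" and the command string sent to the sensor are placeholders only; the actual string depends on how the vision program is written.

10 OPEN "COM3:" AS #1            ' Open the Ethernet line assigned to the vision sensor (assumed line)
20 PRINT #1,"<trigger command>"  ' Send the command string defined by the vision program (placeholder)
30 INPUT #1,MNUM,MX,MY,MC        ' Receive the results as comma-separated values (assumed format)
40 CLOSE #1                      ' Close the line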


3.3. MELFA-Vision

3.3.1. Features

MELFA-Vision is software that provides support for those using vision sensors for the first time and for connections between robot controllers and vision sensors. Below are the basic functions and features of MELFA-Vision.

Table 3-3 MELFA-Vision Basic Functions and Features
・ Logon and logoff: You log on to specify the vision sensors on the network and control them; you log off to end control.
・ Image operations: Images captured with the vision sensor are operated on as follows.
  - Capture request: Manually requests the vision sensor to capture an image, or requests live (real-time) capturing.
  - Camera image adjustment: Changes the display magnification ratio for images captured with the vision sensor.
・ Online and offline: When a robot controller controls vision sensors, the sensor is put online (controllable from the outside); when making settings such as vision program writing, changing, and deleting, it is put offline.
・ Vision program writing: Registers frequently used image processing (pattern matching, blob, and color) as templates. Each of these image processing types can be set easily using a setting screen with an easy-to-understand work procedure. It is also possible to edit, delete, and rename written vision programs.
・ Recognition result display: Displays the vision sensor image processing results so that the recognized quantity and recognized work positions can be checked.
・ Robot controller communication settings: Makes it easy to set up communication between the robot controller and the vision sensor.
・ Robot and vision sensor calibration: Converts the position of work recognized by a vision sensor into the robot coordinate system. In this way, work positions received from the vision sensor become positions at which the robot can directly hold the work.
・ Image log: Images recognized by a vision sensor can be stored on a PC. This makes it possible to analyze pictures of work that could not be recognized and aids in finding the cause.
・ File transfer: Files can be transferred between a vision sensor and a PC.
・ Backup: All data set on a vision sensor can be stored on a PC.
・ Restore: Backup data stored on a PC can be returned to a vision sensor.
・ Cloning: Multiple vision sensors can be given the same settings as one vision sensor.


The correspondence between the COGNEX vision sensor models and MELFA-Vision versions is shown below.
●: indicates a supported model for each version.

Table 3-4 THE CORRESPONDENCE TYPE AND VERSION OF MELFA-VISION
(In-Sight model name / Specification / Supported MELFA-Vision version: Ver. 1.0 / Ver. 1.1 / Ver. 1.1.1 / Ver. 2.0)
・ 5100: Standard
・ 5101: Standard + high resolution 1 (1,024x768)
・ 5103: Standard + high resolution 2 (1,600x1,200)
・ 5100C: Standard + color
・ 5400: High performance
・ 5401: High performance + high resolution 1 (1,024x768)
・ 5403: High performance + high resolution 2 (1,600x1,200)
・ 5400C: High performance + color
・ 5400R: High performance + remote head
・ 5400S: High performance + stainless steel body
・ 5403S: High performance + high resolution 2 (1,600x1,200) + stainless steel body
・ 5400CS: High performance + color + stainless steel body
・ 5600: High speed
・ 5603: High speed + high resolution 2 (1,600x1,200)
・ 1100 (Micro): Micro standard
・ 1400 (Micro): Micro high performance
・ 1403 (Micro): Micro high performance + high resolution 2 (1,600x1,200)
・ 1100C (Micro): Micro standard + color
・ 1400C (Micro): Micro high performance + color
・ 1403C (Micro): Micro high performance + high resolution 2 (1,600x1,200) + color

3.3.2. Operating Environment

Below is the PC operating environment for MELFA-Vision.

Table 3-5 MELFA-Vision Operating Environment
(Item: Requirement)
・ CPU: Intel® Pentium® III 700 MHz (or equivalent) or faster
・ Main memory: 256 MB min.
・ Hard disk: Available capacity of 200 MB min.
・ OS: Microsoft® Windows® 2000 Service Pack 4, or Microsoft® Windows® XP Professional Service Pack 2
・ Display: An SVGA (800x600) or higher resolution display with graphics functions that can display at least 16 colors
・ Disc device: CD-ROM drive
・ Pointing device: Device that operates in the Windows® operating system
・ Communications: Must have an Ethernet port that operates in the Windows® operating system


4. Work Charts

4.1. Work procedure chart

This chapter explains the work procedure for building a vision system using our robots.

Check the following procedure before working.

Start of work
Step 1: Equipment preparation and connection (Chapter 5): prepare and connect the required equipment and install the software (p. 5-23)
Step 2: Vision sensor initial settings (Chapter 6): vision sensor default settings (p. 6-29), work recognition test (p. 6-33)
Step 3: Robot controller settings (Chapter 7): robot controller communication settings (p. 7-54), calibration settings (p. 7-59), robot program writing (MELFA-BASIC IV) (p. 7-70)
Step 4: Automatic operation test (p. 7-72)
Maintenance (Chapter 8): vision sensor data backup (p. 8-77)
End


5. Equipment preparation and connection

This chapter explains how to prepare necessary equipment, connect it to the system, etc., using a system with one vision sensor and one robot controller as an example.

5.1. Equipment preparation

The following equipment is required for building the vision system. Included is equipment that must be provided by the customer, so prepare what is necessary for your system.

Table 5-1 List of Configuration When One Camera Is Used
(Part name / Format / Manufacturer / Q'ty / Remarks)
・ Robot controller / CRn-500 series / Mitsubishi Electric / 1 / Software: K6 or later
・ Robot main unit / All models / Mitsubishi Electric / 1
・ Teaching pendant / R28TB / Mitsubishi Electric / 1
・ Ethernet interface card (*1) / 2A-HR533 / Mitsubishi Electric / 1
・ Network vision sensor basic set:
  - Vision sensor / In-Sight 5000 series / 1 / Software: 3.20 or later
  - Breakout cable / 1
  - Network cable / 1

Provided by the customer:
・ Lens / C mount lens / 1
・ 24V power supply / 1
・ PC / 1
・ Hub / 1
・ Ethernet cable (straight) / 2
・ Lighting device / 1

Equipment arranged for as necessary:
・ Robot hand / 1
・ Hand interface card / 2A-RZ365 / Mitsubishi Electric / 1

(*1) When a CR1 or CR1B robot controller is used, the expansion option box is required.


5.2. Equipment connection

This section explains how to connect the equipment prepared.

5.2.1. Expansion option box installation (for CR1 controller)

A CR1 expansion option box is required in order to connect the Ethernet card to the CR1 controller. If the controller is a CR1 or CR1B, first install the expansion option box. For any other controller, it is not necessary to install an expansion option box.

For details on the expansion option box installation method, see the BFP-A8054 "CR1/CR1B Controller Instruction Manual", "From Controller Setup and Basic Operations to Maintenance", "3. Optional Equipment Installation".

5.2.2. Mounting the Ethernet interface card (card name: HR533)

Install the Ethernet card in the robot controller. For details on robot controller handling, see the BFP-A8054 "CR1/CR1B Controller Instruction Manual", "From Controller Setup and Basic Operations to Maintenance", "3. Optional Equipment Installation".

For details on any controller other than a CR1, see the "CRn Controller Instruction Manual", "From Controller Setup and Basic Operations to Maintenance", "3. Optional Equipment Installation" (n = 2, 3, 4, 7, or 8).

5.2.3. Individual equipment connections

This section explains how to connect each piece of equipment.

For details on how to install the lens on the vision sensor main unit, how to install the breakout cable, and how to install the network cable, see the "In-Sight 5000 Series Installation Guide".

(1) Install the C mount lens on the vision sensor. The C mount lens focal distance depends on the distance between the lens and the work and the field of vision the customer requires for image processing.

(2) Connect the Ethernet cable to the connector (female) labeled "ENET".

(3) Connect the breakout cable to the connector (male) labeled "24VDC".

(4) Connect the other end to the 24 V power supply, using the "24VDC" (white/green) wire and the "GND" (brown) wire.

(5) Connect the other end of the Ethernet cable to the hub.


(6) Connect the Ethernet straight cable to the hub and the other end to the PC.



(7) Connect the Ethernet straight cable to the hub and the other end to the robot controller's Ethernet interface card.

[Figure 5-1 Ethernet Cable Connection]

[Figure 5-2 Completed System Configuration Diagram: In-Sight 5100/5400, 24 V power supply, hub, PC tool, robot, robot controller, expansion option box]


5.3. Software installation

This product comes with two CD-ROMs (In-Sight and MELFA-Vision). Each CD-ROM contains software necessary for starting up the vision system.

This section explains how to install this software.

Before installing the vision sensor dedicated software (In-Sight Explorer), always check the model and type of vision sensor and the version of the vision sensor dedicated software (In-Sight Explorer) being used.

Before installing MELFA-Vision, check the version of MELFA-Vision being used.

5.3.1. Vision sensor dedicated software (In-Sight Explorer before Ver.4.1) installation

This section explains how to install the vision sensor dedicated software (In-Sight Explorer before Ver.4.1).

(1) End all applications that are running.

(2) Insert the In-Sight installation CD-ROM into the PC's CD-ROM drive. When the installation program starts automatically, the following screen is displayed.

Figure 5-3 In-Sight Software Setup Screen

(3) Select the language displayed on the right side of the screen.

(4) Click the [1] – [3] buttons in order to install the respective software.

(5) For [4], click if your PC does not yet have Adobe Reader installed. Also click it to install Adobe Reader if you have an older version.

(6) When each piece of software has been installed, "Installed" is displayed next to that item on the installation program screen. Check that "Installed" is displayed next to [1] – [3]. Whether or not to install [4] is up to your judgment.

(7) When the installation is complete, the icons for the installed software are displayed on the PC's desktop.


5.3.2. Vision sensor dedicated software (In-Sight Explorer Ver.4.1 or later) installation

This section explains how to install the vision sensor dedicated software (In-Sight Explorer Ver.4.1 or later).

(1) End all applications that are running.
(2) Insert the In-Sight installation CD-ROM into the PC's CD-ROM drive. When the installation program starts automatically, the following screen is displayed.
(3) Click the items indicated as not installed, and install each tool.

Figure 5-4 In-Sight Software Setup Screen

(4) When installation is completed, the icons for the installed software will appear on the PC's desktop.
(5) Start the installed software to make sure it has been installed correctly.


5.3.3. MELFA-Vision installation

This section explains how to install MELFA-Vision (network vision sensor support software).

Install this product with the following procedure.

◆ When "MELFA-Vision" is installed in the personal computer, ". NET Framework 1.1" is installed.

If the OS is one of those followings, you must be logged on as an Administrator or as a member of the

Administrators group in order to install this software.

・ Microsoft

®

Windows

®

Microsoft

®

Windows

®

2000 Professional Operating System

XP Professional Operating System

(1) Set this CD-ROM in the personal computer's CD-ROM drive. The Setup screen will be started up automatically.

(2) If the screen does not start up automatically, carry out the following procedure.

(a) Select the [start] button and [run]

(b) Check the CD-ROM drive name. Input as shown below.

(If the CD-ROM drive is "D:", this will be "D:\Setup.exe".)

Figure 5-5 Run

(3) Installation flow

Start
(a) Set the CD-ROM in the PC's CD-ROM drive.
(b) Open "Setup.exe" on the CD-ROM (when it does not start automatically).
(c) The installation wizard starts.
(d) .NET Framework 1.1 is installed (when .NET Framework 1.1 is not yet installed).
(e) License Agreement.
(f) Input "Customer Information".
(g) Input the Product ID. (* The Product ID is printed on the License Certificate.)
(h) Choose the destination location.
(i) Installation wizard complete.
(j) Start the program and confirm that the product was installed correctly.
Finish


Below are the contents of the CD-ROM.

\:
  Setup.exe ・・・ The files for installation of "MELFA-Vision"
  Doc ・・・・・・ Instruction manual (PDF)
  Misc ・・・・・ This folder contains the user registration application form (for faxing)

(4) When the installation is complete, the installed software can be started from the Windows Start menu. For details, see "6.3.1 Starting MELFA-Vision (network vision sensor support software)".


6. Vision Sensor Settings

This chapter explains the vision sensor settings for recognizing work images.

6.1. Vision Sensor Initial Settings

The first time you use your vision sensor, if you use a DHCP server, its IP address is set automatically just by switching on the power for the vision sensor. If you are not using a DHCP server, it is necessary to make this initial setting with the "In-Sight Connection Manager" installed in "5.3.1 Vision sensor dedicated software (In-Sight Explorer before Ver.4.1) installation". The method for this initial setting is as follows.

(1) From the Windows Start menu, click the installed [In-Sight Connection Manager] to start "In-Sight

Connection Manager".

Alternatively, double-click the icon displayed on the desktop.

(2) On the displayed screen select [Setup one or more In-Sight vision sensors to work on my network], then click the [Next] button.

Key point: For details on the In-Sight connection manager, see the In-Sight Explorer help.


(3) Input the MAC address listed on the vision sensor main unit sticker, then click the [Add] button.

When connecting multiple vision sensors, add a MAC address for each vision sensor connected.

Also, if you restart by switching Off, then On the power for all the vision sensors set, the MAC addresses are automatically displayed in a list.

(4) Click the [Next] button.

(5) Your PC's [Subnet mask] (mandatory), [Default gateway] (option), [DNS server] (option), [Domain]

(option) settings are automatically acquired and displayed. Check that these values are correct, then click the [Next] button.


(6) Input the vision sensor [New Name] (host name) and [New IP], then click the [Next] button.

Check with your network administrator for the IP address to set.

Here is an example in which an IP address of "10.50.0.100" is set.

(7) Click the [Configure] button, cut off the power for the vision sensor, wait at least 5 seconds, then switch the power back on again.

(8) When "Settings complete" is displayed in the [Status] column, the settings are complete. Finally click the [Close] button to close the screen.


6.2. Vision Sensor Initial Settings (In-Sight Explorer Ver.4.1 or later)

The first time you use your vision sensor, if you use a DHCP server, its IP address is set automatically just by switching on the power for the vision sensor. If you are not using a DHCP server, it is necessary to make this initial setting with the software installed in "5.3.2 Vision sensor dedicated software (In-Sight Explorer Ver.4.1 or later) installation". The method for this initial setting is as follows.

(1) From the Windows Start menu, click the installed [In-Sight Explorer 4.3.1] to start "In-Sight Explorer".

Alternatively, double-click the icon displayed on the desktop.

(2) Select and click [System] from the displayed screen's menu bar

(3) Select and click [Add Sensor/Device To Network] from the displayed screen's menu bar


(4) The following screen appears when [Add Sensor/Device to Network] is selected.

Click the [OK] button, and turn the vision sensor power OFF.

Wait at least five seconds, and then turn the power ON again.

(5) The devices to add to the network will appear, so select the displayed device and input the IP address. When finished inputting, click the [Close] button.

An IP address is required for a personal computer, a robot controller, and a camera.

Ex.) Personal computer: 192.168.0.10

Controller: 192.168.0.20

Camera: 192.168.0.30

Key point: For details on the In-Sight connection manager, see the In-Sight Explorer help.


6.3. Work recognition test

This section explains how to register the work to be recognized with the vision sensor and how to test recognition of this work.

6.3.1. Starting MELFA-Vision (network vision sensor support software)

This section explains the procedure for starting MELFA-Vision, which can easily execute a work recognition test.

(1) From the Windows Start menu, click [All Programs] – [MELSOFT Application] – [RT ToolBox] –

[MELFA-Vision] to start "MELFA-Vision".

Alternatively, double-click the icon displayed on the desktop.

(2) Select the appropriate vision sensor from the displayed vision sensor list, then click the [Log On] button.

Input a [User Name] whose access rights for the vision sensor are "Full access" and the [Password], then click the [OK] button.

* The [User Name] and [Password] are registered in the vision sensor. The default setting is a user name of "admin" with no password. If the user ID and password have been changed, input the new user name and password.

Figure 6-1 Starting MELFA-Vision


This section explains the MELFA-Vision main screen. For details on the MELFA-Vision functions, see "9.2

MELFA-Vision Function Details”.

Figure 6-2 Main Screen: (1) Window, (2) Title, (3) Menu, (4) Tool buttons, (5) Vision Sensor Information, (6) Job Editing, (7) Calibration data creation, (8) Camera image, (9) Status bar

(1) Window

The default window size is "800x600".

(2) Title

The title character string is "MELFA-Vision [logged in vision sensor name]".


(3) Menu

Table 6-1 MELFA-Vision Menu List

File
  Exit
View
  Refresh Job Files ............ Updates the job list display on the left of the screen.
  Refresh Calibration Data ..... Updates the calibration data list display at the bottom left of the screen.
  Zoom In ...................... Raises the magnification ratio for display of the background picture.
  Zoom Out ..................... Lowers the magnification ratio for display of the background picture.
  Zoom 1:1 ..................... Sets the magnification ratio for display to 1:1.
  Zoom to Max .................. Raises the magnification ratio for display of the background picture to the maximum.
  Zoom to Fit .................. Fits the background picture to the screen.
  Zoom to Fill ................. Adjusts the picture so that it is displayed on the entire screen.
Sensor
  Connection
    Logon… ..................... Logs on to make it possible to control the specified vision sensor.
    Logoff ..................... Logs off from the specified vision sensor.
    Communication setting ...... Edits the vision sensor communication settings.
  Adjust Lens… ................. Shows the adjustment method for the lens mounted on the vision sensor.
  Manual trigger ............... Requests the vision sensor to capture an image.
  Online ....................... Selects whether the vision sensor can be controlled from the outside (online) or image processing can be edited (offline).
  Live Mode .................... Shoots continuously and recognizes images in real time.
  Display Test Result(s)… ...... Monitors the information on the work recognized by the vision sensor.
  Image Log
    Setting… ................... Makes the FTP settings for storing images captured with the vision sensor to the PC.
    Start Log .................. Starts the image log.
    Quit Log ................... Quits the image log.
  User List… ................... Adds, edits, and deletes the user names and passwords with which the vision sensor is logged on to.
  Startup… ..................... Sets the initial state for when the vision sensor power has just been switched On.
  Backup… / Restore… / Clone To…  Backs up and restores the vision sensor setting contents. A clone can be prepared and copied to another vision sensor.
  Delete Calibration Job ....... Deletes calibration data.
  Delete All Files ............. Deletes all the calibration data and vision programs.
Controller
  Communication setting… ....... Makes the settings for the line connection between the robot controller and vision sensor.
  Monitor… ..................... Monitors the data acquired by the robot.
Help ........................... Checks the MELFA-Vision version.


(4) Tool buttons

Figure 6-3 Tool Bar

Table 6-2 Tool Button List

Logon/Logoff ......... On: Logged on / Off: Logged off
Online ............... On: Online / Off: Offline
Manual Trigger ....... Each time this button is clicked, the image is shot.
Live Mode ............ On: Live display underway / Off: Live display ended
Zoom In .............. Raises the magnification ratio for display of the background picture.
Zoom Out ............. Lowers the magnification ratio for display of the background picture.
Zoom to Max .......... Increases the background image's display magnification to the maximum.
Zoom 1:1 ............. Sets the magnification ratio for display to 1:1.
Zoom to Fit .......... Adjusts the background image to the screen.
Zoom to Fill ......... Adjusts the image so it is displayed on the entire screen.
Image Log ............ Makes the settings for FTP transfer of images captured with the vision sensor to the PC.
Start Log/Quit Log ... On: Image log reception enabled / Off: Image log reception disabled

(5) Vision Sensor Information

This displays the information for logged on vision sensors.

Figure 6-4 Vision Sensor Information (Pattern Matching)

Table 6-3 Vision Sensor Information Items (Pattern Matching)

Name ........... The host name of the vision sensor logged on to. Blank when the vision sensor is logged off.
Current Job .... Displays the name of the job being edited.
Found No. ...... Displays the recognition count set with the recognition conditions on the job editing screen.
Threshold ...... Displays the threshold set with the recognition conditions on the job editing screen.
Angle Start .... Displays the start angle set with the recognition conditions on the job editing screen.
Angle End ...... Displays the end angle set with the recognition conditions on the job editing screen.

Figure 6-5 Vision Sensor Information (Blobs)

Table 6-4 Vision Sensor Information Items (Blobs)

Color .......... Displays the background color and the target color for recognition set with the color setting on the job editing screen.
Area Limit ..... Displays the minimum and maximum values set with the work surface area on the job editing screen.
Greyscale ...... Displays the threshold for the grayscale minimum set with the grayscale threshold value on the job editing screen.


(6) Job Editing

A list of the job files for the logged on vision sensors is displayed and job files are managed (created, edited, name changed, updated).

New (N) ............... Creates a new job (vision program).
Deletion (D) .......... Deletes a job.
Renewal (R) ........... Refreshes the job list.
Edit (T) .............. Edits (changes) a job.
Name change (M) ....... Changes the name of a job.
Alias preservation (A)  Saves the job under another name.

Figure 6-6 Job (Vision Program) List

(7) Calibration data creation

A list of the calibration data for the logged-on vision sensor is displayed, and calibration data can be created.

Figure 6-7 Calibration List

(8) Camera image

This displays the logged-on camera image. Black when logged off.

(9) Status bar

This displays the vision sensor mode, image information for the mouse position, and PC image log reception status.

Figure 6-8 Status Bar

Table 6-5 Status Bar

Left frame ..... Displays the mouse position image information: (X coordinate value, Y coordinate value) = Contrast value
Center frame ... When the vision sensor status changes, one of the following character strings is displayed (otherwise blank): "Online", "Offline", "Live", "Incomplete online", "Discrete online"
Right frame .... Displays the PC image log reception status. When reception is enabled: "Image log reception enabled"; when disabled: blank.


6.3.2. Image

This section explains how to adjust the brightness and the lens diaphragm (aperture) for the image captured by the vision sensor.

(1) Check the image shot with MELFA-Vision [Camera Image].

From the MELFA-Vision menu, click [Sensor] – [Live Mode], or click the Live Mode button on the tool bar, to put MELFA-Vision into live image mode.

Put the work to be recognized under the vision sensor and check the resulting image with

MELFA-Vision [Camera Image].

Figure 6-9 Image Check Example

(2) If the field of vision is not appropriate, adjust the distance between the vision sensor and the work or replace the lens.

① When the image is too large

Figure 6-10 Example in Which the Image Is Too Large

② When the image is too small

Figure 6-11 Example in Which the Image Is Too Small


(3) If the brightness is not appropriate, adjust the lens "Diaphragm".


Figure 6-12 Camera Lens adjustment

If the appropriate brightness can not be achieved by just adjusting the Diaphragm, provide different lighting.

① Too bright

② Too dark

Figure 6-13 Example in Which the Image Is Too Bright

Figure 6-14 Example in Which the Image Is Too Dark


(4) If the focus is not appropriate, adjust the lens "focus".

Figure 6-15 Example in Which The Image Is Out of Focus


6.3.3. Image processing settings

This section explains how to make the image processing settings, using pattern matching image processing

(only one robot, results output as robot absolute coordinate values) as an example. For details on other image

processing, see "9.3.1 Templates provided for MELFA-Vision".

(1) Click [New] under Job (Vision Program) List at the left of the MELFA-Vision main screen. Select the process method from the displayed [Processing Method] screen, and then click the [OK] button.

Figure 6-16 Selection of Image Processing Method


(2) Carry out the work on the displayed "Job Editing" screen in order from the left tab to the right tab.

First, adjust the image with the [Adjust Image] tab. The setting values are adjusted with the track bars on the tab. (Refer to Table 6-6.)

(3) After changing the displayed items, click the [Test] button. The image reflecting the changed settings is displayed in the main screen [Camera Image], so adjust until there is clear contrast between the work and the background. For details on the setting items, see below.

Table 6-6 List of [Adjust Image] Tab Items

Exposure (setting range: 0.032 - 1000)
  This adjusts the exposure time for captured images. Lowering this value shortens the image take-in time and reduces the amount of light accumulated on the CCD array, so the image becomes darker. Raising this value increases the amount of accumulated light, so the image becomes brighter.

Gain (setting range: 0 - 255)
  This adjusts the image brightness. Adjust by moving the track bar left and right.
  (Example images: when the value is decreased / when the value is increased)

Orientation (Normal / Mirrored horizontally / Flipped vertically / Rotated 180 degrees)
  This changes the direction in which the image is displayed.
  (Example images: normal image / mirrored horizontally / flipped vertically / rotated 180 degrees)

Trigger (Camera / Continuous / External / Manual / Network)
  This specifies the image take-in trigger for when the vision sensor is "online".
  [Camera]: The image is taken in at the rising edge detected at the camera hardware trigger input port.
  [Continuous]: Images are taken in continuously.
  [External]: The image is taken in at the rising edge of a discrete I/O (*1) input bit or serial command.
  [Manual]: The image is taken in when the <F5> key is pressed.
  [Network]: The image is taken in when the trigger is input to the master vision sensor on the network.

【Camera direction (mounted facing down or facing up)】
Whether the camera faces down or up is specified with the [Camera] tab.

【Captured picture】
Depending on the camera direction, the picture shows the front side or the reverse side of the work.

【Recognition result】
When the same work is photographed facing down and facing up and the vision sensor and robot coordinates are the same, the C-axis component of the result is positive for the downward-facing camera (front side) and negative for the upward-facing camera (reverse side).

(*1) For details on discrete I/O, see the "In-Sight Installation Guide" that comes with this system.


(4) The area in which work is detected, registration of work to search for, and the work position output to the robot are set with the "Job Editing" screen [Pattern & Search Area] tab.


(4-1) Determining the search area

When you click the "Search area" [Image] button, the focus shifts to the main screen and a red frame is displayed around [Camera Image] on the main screen. The registered work is detected from the area enclosed by the red frame. The area in which the work is detected can be changed with the mouse or keyboard.

If you use the keyboard, each time the [F9] key is pressed, the "area adjustment mark" changes, and fine adjustments can be made with the [arrow keys]. To finalize the area, click the [OK] button; to cancel it, click the [Cancel] button. The focus returns to the "Job Editing" screen.

(Screen image: the camera image with the area adjustment marks and the [OK]/[Cancel] buttons.)


(4-2) Determining the recognition pattern

When you click the "Pattern select" [Image] button, the focus shifts to the main screen and a red frame is displayed around [Camera Image] on the main screen. The registered work is enclosed by the red frame. For operations, use the mouse or keyboard. If you use the keyboard, each time the [F9] key is pressed, the "area adjustment mark" changes and fine adjustments can be made with the [arrow keys]. To finalize the pattern selection, press the [OK] key; to cancel it, press the [Cancel] key. The focus returns to the "Job

Editing" screen.

(Screen image: the camera image with the area adjustment marks and the [OK]/[Cancel] buttons.)


(4-3) Specifying the work coordinates sent to the robot

When you click the "Output position setting" [Image] button, the focus shifts to the main screen and a red circle is displayed on [Camera Image] on the main screen. Move this circle with the mouse or keyboard to specify which position of the registered work to send to the robot. If you use the keyboard, fine adjustments can be made with the [arrow keys]. To finalize the setting, click the [OK] button; to cancel it, click the [Cancel] button. The focus returns to the "Job Editing" screen.

For MELFA-Vision versions earlier than Ver. 1.2, "Display mark at center of pattern" does not appear; designate the position mark of the pattern area to be transmitted yourself.

For MELFA-Vision Ver. 1.2 or later, if "Display mark at center of pattern" is checked, the position mark of the pattern area decided in step (4-2) appears automatically.

(Screen images: the camera picture with the [OK]/[Cancel] buttons. The center of the red circle is the coordinate output position.)


(5) This determines the recognition conditions.

When you click the "Job Editing" screen [Processing Condition] tab, the conditions for searching for the registered work are set.

When you change a displayed setting item, then click the [Test] button, the results of image processing under the specified conditions are displayed at the main screen [Camera Image], so check whether or not the work is correctly recognized. For details on the setting items, see below.

Table 6-7 List of [Processing Condition] Tab Items

Number to Find (setting range: 1 - 255)
  This sets the maximum number of pieces of work that can be detected in one image processing.

Accept (setting range: 1 - 100)
  This sets how closely the detected work must match the registered work in order to be recognized. The vision sensor expresses the degree of matching of the detected work as 1-100%. Work whose degree of matching is lower than the value set here is not recognized.

Find Tolerances - Angle Start / Angle End (setting range: -180 - 180 each)
  These set the permitted tilt of the detected work (start angle to end angle), with the angle of the registered work taken as 0°.

Sort By (setting range: None / X / Y)
  Returns the recognized work results in the specified sort order. When "None" is specified, the results are returned sorted in order of highest recognition ratio. This sorting is used, for example, when multiple work pieces are detected and you want to grasp the work in order from left to right in the image. The "X" and "Y" specified here are the "X" and "Y" of the red frame displayed with the search area setting.

Offset of Rotation (setting range: -180 - 180)
  When outputting the recognized work results, this function adds the specified offset amount to the detection angle. (For example, with an offset of +90°, work detected at 30° is output as 120°.) This is used when registering patterns if an image with 0° tilt cannot be captured.

Calibration No. (setting range: None / 1 - 10)
  This selects the calibration data used when outputting the recognized work coordinate values converted to robot coordinate values. Work information can be converted to the coordinate systems of up to three robots and sent, so calibration numbers can be selected for three robots.
  * The figure above shows a screen for a system with one robot. When a system with three robots is selected, [Robot 2:] and [Robot 3:] are also displayed.

* For all the items, if a value outside the setting range is input, it is replaced with the nearest upper or lower limit value.


(5-1) Setting examples for the maximum detection count ("Number to Find")

When 10 is set:
When you click the "Job Editing" screen [Test] button, all six pieces of work captured in the image are recognized. Each piece of work is displayed with a "+" pointer mark and a number starting from 0 in order of highest degree of match.

When 3 is set:
When you click the "Job Editing" screen [Test] button, only the three pieces of work with the highest degree of match are detected. They are displayed with a "+" pointer mark and a number from 0 to 2 in order of highest degree of match.


(5-2) Setting examples for the threshold ("Accept")

The higher the threshold, the greater the precision of the detection.

When 40% is set as the threshold (maximum detection count: 10):
When you click the "Job Editing" screen [Test] button, the single piece of work at the top left is recognized as two pieces, and the large work is also recognized, so the recognition count becomes 7.

When 60% is set as the threshold:
When you click the "Job Editing" screen [Test] button, only the registered four pieces of work are recognized, which is correct.


(5-3) Setting examples for the start angle and end angle

When start angle: –45° and end angle: 45° are set:
When you click the "Job Editing" screen [Test] button, only work whose tilt is within ±45° of the registered work angle (0°) is detected.

When start angle: –45° and end angle: 180° are set:
When you click the "Job Editing" screen [Test] button, only work whose tilt is within the range –45° to +180° of the registered work angle (0°) is detected.


(5-4) Setting examples for the sort direction ("Sort By")

Sort direction: X

When you click the "Job Editing" screen [Test] button, the recognized work is displayed with a number from 0 in order of the +X direction (from top to bottom in the figure above) of the frame specified with the search area specification.

Sort direction: Y

When you click the "Job Editing" screen [Test] button, the recognized work is displayed with a number from 0 in order of the +Y direction (from left to right in the figure above) of the frame specified with the search area specification.


(6) The "Job Editing" screen [Image log] tab is explained in "9.2.2Job Editing screen ([Image Log]

tab)"; the [Results Cell Position] tab is explained in "9.2.3Job edit screen ([Result Cell Position] tab)".

(7) When you want to check not only the image but also the numeric data in the image processing results, click [Sensor] – [Display Test Result(s)] in the main screen menu.

In the initial display, recognition results monitors for three robots are displayed.

To view just the results for [Robot 1], move the mouse pointer to the right edge of the screen and while dragging the screen right edge, move the mouse to the left.


(8) If the recognition results are not what was expected, change the recognition conditions with the

"Job Editing" screen [Processing Condition].

(9) If the recognition results are what was expected, click the "Job Editing" screen [Save] button to save the image processing conditions set up till now to the vision sensor.

When you click the [Save] button, a "Confirmation" screen is displayed to check that you want to save the settings. If you click [No], the save is cancelled. If you click [Yes], the "Input the File

Name" screen is displayed, so input the desired vision program name, then click the [OK] button.

You can check that the file was saved with the "Job Editing" screen "File Name" or the main screen "Current Job", or the "Job (Vision Program) List".

(10) Click the "Job Editing" screen [Close] button to close the "Job Editing" screen.


(11) To save Job1.job under another name, select Job1.job and click [Alias preservation] (save as).

(Ex.) Here, Job1 is saved as Job2.

The contents of the same vision program are saved under another name. This is useful, for example, when programs that use the same work, such as front-side and reverse-side judgment, are to be kept separately.


7. Robot Controller Settings

This chapter explains the items set in the robot controller, using a system with one vision sensor and one robot controller as an example.

7.1. Robot Controller Parameter Settings

In order for the robot controller to control the vision sensor, it is necessary to set the parameters for the communication connection with the vision sensor. This section explains the methods for setting the parameters.

(1) Switch On the robot controller power.

(2) Set the robot controller IP address.

With the teaching box (R28TB), turn the key switch in the "Enable" direction.

[MENU] – [5: Maintenance] – [1: Parameters] – Input "NETIP"– [INP] –Input the IP address.

Switch Off the robot controller power, then switch it On again.

(Teaching box screens: opening screen ("CRn-5xx Ver.K7 / RV-6S / Press any key to start") → [MENU] → <Menu> (1.Teaching 2.Operation 3.Management 4.Monitor 5.Maintenance 6.Setting) → [5] → <Maintenance> (1.Parameters 2.Initialize 3.Brake 4.Origin 5.Move) → [1] → <Parameters> "Specify parameter name" → input "NETIP" → [INP] → "Specify data" → input the IP address (example: 10.50.0.1).)

(3) From the Windows Start menu, click [All Programs] – [MELSOFT Application] – [RT ToolBox] –

[MELFA-Vision] to start "MELFA-Vision".


(4) This makes the settings for the robot controller and MELFA-Vision to communicate.

Click the "Communications server" displayed on the Windows taskbar to display the "Communication Server" screen. (If the communication parameters have not been set yet, all the information is written in red.)

Click the [Setting] button to display the "Communication Setting" screen.

Click [Method], then select "TCP/IP".

Click the [Detail] button to display the "TCP/IP Communication Protocol" screen.

In the [IP Address] field, input the robot controller IP address.

Click the [OK] button to finalize the settings.


Click the [Set(Save and Close)] button to store the communication settings you have set.

Check that all the frames on the "Communication Server" main screen become light blue. If a frame is green, redo the setting.

(5) This sets the parameters for the robot controller and vision sensor to communicate.

From MELFA-Vision menu, select [Controller] – [Communication Setting] to display the

"Communication Setting" screen.

This sets the device number for the COM number used. Here is an example in which a COM number of "COM2:" is used and the setting content is "OPT15".

Click the "COM2" pull-down, then select "OPT15".


From the "Device List", select [OPT15], then click the [Change] button.

On the displayed "Device Setting" screen, switch On the [Change the Parameter to connect Vision] checkbox, then input the vision sensor IP address as the IP Address.

Click the [OK] button and check that a "*" is displayed in the "Communication Setting" screen

"Device List".

Click the [Write] button to display the "Confirmation" screen.


If you click [Yes], the parameters are written to the robot controller. A message is displayed that the controller power will be switched Off, then On again to put the parameter change into effect.

Click the [Yes] button and wait for the robot controller power supply to be reset.

When the robot controller starts, click the [Read] button on the "Communication Setting" screen to check whether the parameters have been written normally.

If the parameters were written normally, click the [Exit] button to close the "Communication Setting" screen.


7.2. Calibration Setting

Calibration is a function that converts the vision sensor coordinate system into the robot coordinate system.

This calibration work is necessary for recognizing what position in the robot coordinate system the recognized work is at. If this setting is not made, the coordinates for work recognized by the vision sensor display the results in the sensor coordinate system.
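As an illustration only (a minimal sketch assuming a simple planar affine model; the vision sensor's actual calibration computation is not described in this manual), the conversion determined by calibration can be written as:

$$
\begin{pmatrix} X_r \\ Y_r \end{pmatrix}
=
\begin{pmatrix} a & b \\ c & d \end{pmatrix}
\begin{pmatrix} x_v \\ y_v \end{pmatrix}
+
\begin{pmatrix} t_x \\ t_y \end{pmatrix}
$$

where (x_v, y_v) is a position in the vision sensor (image) coordinate system and (X_r, Y_r) is the corresponding position in the robot coordinate system. The four marks taught in this section give four coordinate pairs (eight equations) for the six unknowns a, b, c, d, t_x, t_y, so such a transform can be determined, for example, by a least-squares fit.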

This section explains calibration work using MELFA-Vision.

(1) Prepare the equipment used in calibration work.

Prepare four marking labels (copy the marking sheet in the appendix, enlarging or reducing the copy to match the image field of vision) and a calibration jig (for example, a hand with a sharpened tip for specifying the center of a marking label with the robot).

(2) Set MELFA-Vision to a live image.

From the MELFA-Vision menu, click [Sensor] – [Live Mode], or click the Live Mode button on the MELFA-Vision tool bar, to put MELFA-Vision into live image mode. Check that the button appears pressed (sunken).

(3) Adjust the mark positions so that the four marking labels for calibration fit in the screen. Here is an example in which the marking sheet in the appendix is placed. The four marks are set to be marks 1-4 as in the figure below.

(Figure: marks 1 to 4 arranged in the camera field of view.)


(4) Exit the live image.

From the MELFA-Vision menu, click [Sensor] – [Live Mode], or click the Live Mode button on the MELFA-Vision tool bar, to exit live image mode.

(5) From the MELFA-Vision main screen, select [No. 1] in the [Calibration Data List]. This section explains [No. 1] data creation.

(6) On the "Create Calibration Data" screen, click the [About How to specify Reference Point] button to check the calibration operation method.


(7) Specify the first point on the vision sensor.

Click the [Image] button for the first point.

Use the mouse or the [arrow keys] to move the mark to mark 1, then click the [OK] button.


(8) Specify the second point on the vision sensor.

Click the [Image] button for the second point.

Use the mouse or the [arrow keys] to move the mark to mark 2, then click the [OK] button.


(9) Specify the third point on the vision sensor.

Click the [Image] button for the third point.

Use the mouse or the [arrow keys] to move the mark to mark 3, then click the [OK] button.


(10) Specify the fourth point on the vision sensor.

Click the [Image] button for the fourth point.

Use the mouse or the [arrow keys] to move the mark to mark 4, then click the [OK] button.


(11) Specify the first point with the robot.

Use the teaching box to move the robot hand to the first point.

* For this work, the use of a pointed-tip object in the hand is recommended.

On the "Create Calibration Data" screen, click the [Position] button for the first point to acquire the robot's current position.

(12) Specify the second point with the robot.

Use the teaching box to move the robot hand to the second point.

On the "Create Calibration Data" screen, click the [Position] button for the second point to acquire the robot's current position.

(13) Specify the third point with the robot.

Use the teaching box to move the robot hand to the third point.

On the "Create Calibration Data" screen, click the [Position] button for the third point to acquire the robot's current position.


(14) Specify the fourth point with the robot.

Use the teaching box to move the robot hand to the fourth point.

On the "Create Calibration Data" screen, click the [Position] button for the fourth point to acquire the robot's current position.

(15) Input a comment.

In the "Create Calibration Data" screen [Comment] column input a comment to make the meaning of this work easy to understand, then click the [Create Data] button.

(16) Check that the calibration data is created.

Check the MELFA-Vision main screen [Calibration Data List] column and check that there is an

"*" in the "No. 1" [Existance] column.

(17) Close the "Create Calibration Data" screen.

Click the "Create Calibration Data" screen [Exit] button.


(18) The calibration data is set for the created job and the recognized work is displayed with the robot coordinate system.

From the MELFA-Vision main screen "Job(Vision Program)List", select "Job1.job", then click the [Edit] button.

On the displayed "Job Editing" screen, click the [Processing Conditions] tab.

Click the "Calibration No." – "Robot 1:" pull-down, then with the "Job Editing" screen

[Processing Conditions] tab, select "1" as the [Calibration No.]


Place the work under the vision sensor, then click the "Job Editing" screen [Test] button.

From the MELFA-Vision menu, when you click [Sensor] – [Recognition Test Results], the coordinates for the recognized work are displayed with the robot coordinate system.


(19) The "Job Editing" screen "Calibration" specification has been changed, so the recognition conditions are saved.

Click the "Job Editing" screen [Save] button.

(20) Close the "Job Editing" screen.

Click the "Job Editing" screen [Exit] button.


7.3. Robot Program Writing

In order to start (execute) image processing with the vision sensor from the robot, it is necessary to execute commands controlling the vision sensor in a robot program written in MELFA-Basic IV.

7.3.1. Flow for starting image processing by the robot

The flow for starting image processing from a robot program is shown below.

① Check the line connection with the vision sensor (state variable: M_NVOPEN)

② Line connection with the vision sensor (MELFA-BASIC IV: NVOPEN)

③ Vision program start (MELFA-BASIC IV: NVPST)

④ Vision sensor detection quantity acquisition (state variable: M_NVNUM)

⑤ Vision detection position data acquisition (state variable: P_NVS1 - P_NVS8)

⑥ After this, the robot is moved with the position data detected with the vision sensor.
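A condensed sketch of this flow in MELFA-BASIC IV is shown below. It uses only the commands and status variables listed above; "Job1" and the cell names "E76", "J81" and "L85" follow the sample program in 7.3.2 and depend on the vision program actually used, and P1 is assumed to be a previously taught reference position.

10 IF M_NVOPEN(1)<>1 THEN                  ' ① Check the line connection with vision sensor number 1
20 NVOPEN "COM2:" AS #1                    ' ② Open the line and log on to the vision sensor on COM2
30 ENDIF
40 WAIT M_NVOPEN(1)=1                      ' ① Wait until the logon is completed
50 NVPST #1,"Job1","E76","J81","L85",0,10  ' ③ Start the vision program and receive the results
60 IF M_NVNUM(1)=0 THEN *NG                ' ④ Acquire the detection quantity (0 = nothing detected)
70 P10=P1                                  ' Start from the taught reference position P1
80 P10.X=P_NVS1(1).X                       ' ⑤ Overwrite X, Y and C with the first detected position
90 P10.Y=P_NVS1(1).Y
100 P10.C=P_NVS1(1).C
110 MOV P10                                ' ⑥ Move the robot with the detected position data
120 HLT
130 END
140 *NG                                    ' No work was detected
150 HLT
160 END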

For details on the vision program dedicated MELFA-BASIC IV commands and status variables, see "9.1

Vision Sensor Dedicated Commands and Status Variables".

7.3.2. Writing a Sample Robot Program

The robot program below is written and stored in the robot controller. For details on the storage method, see the "RT ToolBox PC Support Software Instruction Manual".

Example acquiring data in the absolute coordinates using pattern matching

1 ' Before this program is run, the evacuation position P0, the work grasping position P1, and the work placement position P2 must have already been taught.

2 ' Example: P0=(+250.000,+350.000,+300.000,-180.000,+0.000,+0.000)(7,0)

3 ' P1=(+500.000, +0.000, +100.000, -180.000, +0.000, +10.000)(7,0)

4 ' P2=(+300.000, +400.00, +100.000, -180.000, +0.000, +90.000)(7,0)

10 IF M_NVOPEN(1)<>1 THEN                  ' When logon has not been completed for vision sensor number 1
20 NVOPEN "COM2:" AS #1                    ' Connects with the vision sensor connected to COM2.
30 ENDIF
40 WAIT M_NVOPEN(1)=1                      ' Connects with vision sensor number 1 and waits for logon to be completed.
50 NVPST #1,"Job1","E76","J81","L85",0,10  ' Starts vision program [Job1], receives the number of recognitions from the [E76] cell
60                                         ' and the recognized coordinates from the [J81]-[L85] cells, and stores them in P_NVS1 (30).
70 MOV P0                                  ' Moves to the evacuation point.
80 IF M_NVNUM(1)=0 THEN *NG                ' If the detection count is 0, jumps to the error processing.
90 FOR M1=1 TO M_NVNUM(1)                  ' Loops once for each detection by vision sensor number 1.
100 P10=P1                                 ' Creates the target position P10 using the vision sensor 1 results data.
110 P10.X=P_NVS1(M1).X
120 P10.Y=P_NVS1(M1).Y
130 P10.C=P_NVS1(M1).C
140 MOV P10,10                             ' Moves to 10 mm above the work grasping position P10.
150 MVS P10                                ' Moves to the work grasping position P10.
160 DLY 0.1                                ' Wait time of 0.1 second.
170 HCLOSE 1                               ' Closes hand 1.
180 DLY 0.2                                ' Wait time of 0.2 second.
190 MVS P10,10                             ' Moves to 10 mm above the work grasping position P10.
200 MOV P2,10                              ' Moves to 10 mm above the work placement position P2.
210 MVS P2                                 ' Moves to the work placement position P2.
220 DLY 0.1                                ' Wait time of 0.1 second.
230 HOPEN 1                                ' Opens hand 1.
240 DLY 0.2                                ' Wait time of 0.2 second.
250 MVS P2,10                              ' Moves to 10 mm above the work placement position P2.
260 NEXT M1                                ' Repeats.
270 HLT                                    ' Program pause (create the appropriate processing).
280 END                                    ' Exit
290 '
300 *NG                                    ' Error processing
310 ERROR 9000                             ' Outputs error 9000.
320 HLT                                    ' Program pause (create the appropriate processing).
330 END                                    ' Exit


(1) The evacuation position, work grasping position, and work placement position are taught in order to operate the robot.

Use the teaching box to open the stored robot program and open the position edit screen.

① With the teaching box (R28TB), turn the key switch in the "Enable" direction.

② Input [MENU] - [1: Teaching], then press [INP] to display the command edit screen.

③ Press [POS] + [ADD] to display the position edit screen.

(Teaching box screens: opening screen → [MENU] → <Menu> → [1] → <Teaching> "Specify program" → input the program number (example: 1) → [INP] → command edit screen (example: "10 IF M_NVOPEN(1") → [POS]+[ADD] → position edit screen "MS.POS(P1) PR:1 ST:1".)

(2) Move the robot to the evacuation position.

Switch On the robot servo power supply and move the robot with Jog operation.

① Press the [Deadman Switch] + [STEP/MOVE] to switch On the servo power supply.

② Press the [Deadman Switch] + [STEP/MOVE] + [key for each axis (for example +X/-X)] to move the robot to the evacuation point.

(3) Teach the evacuation point.

① Input the position name "P0" for the evacuation point in the position edit screen [POS] column, or scroll the position names with [+/FORWD] or [-/BACKWD] to display "P0".

② Press [STEP/MOVE] + [ADD] + [ADD] ([ADD] twice) to store the current robot position as position name "P0".

(Teaching box screens: position edit screen "MS.POS(P1)" → input "P0" and [INP], or scroll with [+]/[-] to display "MS.POS(P0)" → [STEP]+[ADD] → "Edit position?" → [STEP]+[ADD] → "Correcting position" → the current robot position (example: X: +495.93, Y: +0.00, Z: +729.43) is stored in "P0".)

(4) Teach the work grasping position and work placement position in the same way.

(5) When this work is complete, press the teaching box [MENU] button to store the robot program.

(6) Turn the teaching box key switch in the "Disable" direction.


7.4. Executing the automatic operation test

This section explains automatic operation that starts the program created with “7.3 Robot Program

Writing” and transports the work recognized with the vision sensor.

7.4.1. Put the vision sensor online.

In order for the robot controller to control the vision sensor, it is necessary to put the vision sensor

"online". This section explains the work for putting the vision sensor online.

From the MELFA-Vision "Main" screen menu, click [Sensor] - [Online] or click on the toolbar button.

7.4.2. Test by executing each step.

Open the robot program created with the teaching box and while executing one line at a time, check the robot program operations.

For details on the step execution method, see "Detailed explanations of functions and operations

(BFP-A5992)" "3.6 Debugging Operations".

CAUTION

There are command words that do not complete in a single step execution.

If execution does not move to the next step after one step execution, execute the step again.

Example: NVOPEN requires at least seven repetitions of step execution.


7.4.3. Starting a Robot Program

This section explains the work for starting the stored robot program "1" with the robot controller operation panel (O/P).

① Turn the operation panel key switch in the "Auto (Op)" direction.

② Press the [CHNG DISP] button to display the override on the STATUS NUMBER display.

③ Press the [UP/DOWN] button to set the STATUS NUMBER display to "o.010". (This sets the robot override to 10%.)

④ Press the [CHNG DISP] button to display the robot program number on the STATUS NUMBER display.

⑤ Press the [UP/DOWN] button to set the STATUS NUMBER display to "P.0001". (This selects robot program 1.)

⑥ Press the [SVO ON] button to switch On the robot servo power supply.

⑦ Check around the robot to make sure that everything will be safe even if the robot operates.

⑧ Press the [START] button.


⑨ The main screen [Camera Image] displays the recognition results and the robot transports all the work recognized by the vision sensor. After transporting, the robot program stops.


7.5. When the robot can not grasp the work normally

This section explains what to do if the robot program started normally, but the robot could not grasp the work normally.

7.5.1. Check the MELFA-Vision [Camera Image].

Check if the position of the work recognized by the vision sensor is correct.

(1) Check the main screen [Camera Image] and check that the "+" is on the recognized work.

(2) Check if the position of the "+" is the position specified with the "Job Editing" screen "Output position setting".

(3) If the position of the recognized work is abnormal, re-edit the MELFA-Vision job.

7.5.2. Comparison of the position data for the work recognized by the vision sensor and the position data received by the robot

Check if the robot received the work position data normally from the vision sensor.

(1) From the main screen menu, click [Sensor] - [Recognition Test Results].


(2) From the main screen menu, click [Controller] - [Monitor] to display the "Monitor of Controller" screen.

This screen monitors the controller's dedicated status variables for the vision sensor.

(3) Select the line connecting the robot controller and the vision sensor (in the explanation up till now

"COM2:"), then click the [Recognition Details] button.

(4) Compare the "Display Test Result(s)" screen and "Detail Monitor" screen "P_NVS1" values to check if the robot controller is receiving the data normally.

(5) If the work position data received by the robot is abnormal, check the [Start Cell] and [End Cell] positions specified with the robot program "NVPST" command.

(6) If the work position data received by the robot is normal, re-do the calibration setting.


8. Maintenance

This chapter explains overall maintenance: vision sensor data backup and restoration, the image log function, vision sensor cloning, the startup function, and user list registration.

8.1. Vision Sensor Data Backup

The backup function stores on a PC all the files (*.job, *.bmp, *.jpg, proc.set and hosts.net) stored on the vision sensor.

This function can be used with the specified vision sensor either logged on or logged off.

Also, although this function can be used with the vision sensor either Online or Offline, since robot and vision sensor access can be slowed down by file transfer operations, normally back up with the vision sensor Offline.

This section explains backup work using MELFA-Vision.

(1) Display the MELFA-Vision backup screen.

From the MELFA-Vision menu, click [Sensor] - [Backup] to display the “Backup” screen.

(2) From the “Backup” screen "Sensor List", select the vision sensor to back up.

The destination to which backed up files are transferred can be changed with the [Browse] button.

Select whether to back up all of the files in the selected vision sensor, or to back up only the vision programs, and then click the [Backup] button.



(3) If you select a vision sensor other than the one currently logged on to, the "User Name And Password" screen is displayed, so input the user name and password for the vision sensor to be backed up, then click the [OK] button.

This screen is not displayed if the currently logged on vision sensor is selected with the "Sensor

List".

This screen is also not displayed if a sensor is selected that is not logged on but that vision sensor can be logged on with the currently logged on user name and password.

(4) A confirmation screen is displayed, so check the contents, then click the [Yes] button.

* A vision sensor can be backed up even when it is online, but file transfer operations may delay the robot and vision sensor access.

(5) When the backup starts, the indicator progresses as on the screen below.

To cancel a backup that is underway, click the [Stop] button.

(6) When the backup is complete, the completion message is displayed.

When the [OK] button is clicked, display returns to the ”Backup” screen.


8.2. Vision Sensor Data Restoration

The restore function takes the files backed up to the PC with the backup function and returns them to the vision sensor.

The restored files are all the files that were backed up.

This function can be used with the vision sensor either logged on or logged off.

Also, although this function can be used with the specified vision sensor either Online or Offline, since the robot and vision sensor access can be slowed down by file transfer operations, it is recommended to restore with the vision sensor offline.

This section explains restoration work using MELFA-Vision.

(1) Display the MELFA-Vision restore screen.

From the MELFA-Vision menu, click [Sensor] - [Restore] to display the “Restore” screen.

(2) Specify the folder to transfer from.

The folder to transfer from can be changed with the [Browse] button.

Select the vision sensor to restore from the "Sensor List".

Select whether to restore all files or only the vision program, and then click the [Restore] button.


(3) If you select a vision sensor other than the one currently logged on to, the "User Name and

Password" screen is displayed, so input the user name and password for the vision sensor to be restored, then click the [OK] button.

This screen is not displayed if the currently logged on vision sensor is selected with the "Sensor

List".

This screen is also not displayed if a sensor is selected that is not logged on but that can be logged onto with the currently logged on user name and password.


(4) A confirmation screen is displayed, so check the contents, then click the [Yes] button.

(5) A confirmation screen is displayed to ask whether or not to enable restoration of the vision sensor network setting files.

To restore the vision sensor setting file (proc.set) and the host table file (hosts.net) vision sensor network setting file too, click the [Yes] button.

CAUTION

Only restore a network setting file to the sensor it was backed up from.

Restoring a network settings file to any other sensor can cause trouble.

(6) When the restoration starts, the indicator progresses as on the screen below.

To cancel a restoration that is underway, click the [Stop] button.


(7) When the restoration is complete, the completion message is displayed.

To reflect the restoration settings, restart the vision sensor.

When the [OK] button is clicked, display returns to the [Backup from Vision Sensor] screen.


8.3. Vision Sensor Cloning

The cloning function can create multiple vision sensors with the same files as the original one vision sensor.

This function can be used with the vision sensors either logged on or logged off.

Also, although this function can be used with the vision sensors either Online or Offline, since robot and vision sensor access can be slowed by file transfer operations, it is recommended to carry out cloning with the vision sensors Offline.

This section explains cloning work using MELFA-Vision.

(1) Display the MELFA-Vision cloning screen.

From the MELFA-Vision menu, click [Sensor] - [Clone To] to display the “Clone” screen.

(2) On the "Clone" screen, select the vision sensor to be the cloning source and the vision sensor(s) to be turned into clones. Multiple vision sensors can be selected; to select multiple vision sensors, select them with the mouse while holding down the keyboard [Shift] key or [Ctrl] key. Select whether to back up all files from the clone source vision sensor and create clones, or to create clones of only the vision program, and then click the [Clone] button.

(3) If the clone source vision sensor and the vision sensor(s) to be cloned from it are different models, the following warning message is displayed.

To continue the work, click the [Yes] button; to cancel it, click the [No] button.


(4) A confirmation screen is displayed, so check the contents, then click the [Yes] button.

(5) When the cloning work starts, the indicator progresses as on the screen below.

To cancel cloning work that is underway, click the [Stop] button.

(6) When the cloning is complete, the completion message is displayed.

To reflect the cloned settings, restart the vision sensor.

When the [OK] button is clicked, display returns to the ”Clone” screen.


8.4. Image Log Acquisition Settings and Reception Start/End

The image log acquisition function is a function that stores the images captured by the vision sensor with the conditions set with the job (Always/OK images/NG images) while the vision sensor is communicating with the controller in online mode.

Using this function makes it possible to check afterwards on images that could not be recognized and track down the reason why they could not be recognized.

In order to acquire the image log, the FTP server is started on the PC on which the images are stored, so set the FTP server user name and password.

This section explains the method for acquiring the image log using MELFA-Vision.

(1) Display the MELFA-Vision image log setting screen.

From the MELFA-Vision menu, click [Sensor] - [Image Log] - [Setting], or click the Image Log button on the MELFA-Vision tool bar, to display the "Image Log Setting" screen.

(2) Enter the FTP server user name and password on the displayed "Image Log Setting" screen.

This user name and password are the ones for the FTP server and are different from the user name and password for logging on to the vision sensor. However, the same user name and password may be set for both.

Also, enter here the same user name and password as for the "Job Editing" screen "Image Log" tab

"User Name of FTP" and "Password of FTP".

For details on the “Image Log” tab setting method, see "9.2.2 Job Editing screen ([Image Log] tab)

".

The storage destination for acquired images can be changed with the [Browse] button.

When the settings are complete, click the [OK] button to close the "Image Log Setting" screen.

The next time the "Image Log Setting" screen is opened, the screen is opened with the same settings as the previous time. To make the same settings as the previous time, remove the check from the [Change User Name] check box.

(3) When the image log is started, the FTP server is started.

From the MELFA-Vision menu, click [Sensor] - [Image Log] - [Start Log], or click the Start Log button on the MELFA-Vision tool bar.

When the FTP server starts up and image log reception becomes possible, "Image log reception enabled" is displayed at the right end of the status bar of the MELFA-Vision main screen.

In this state, the vision sensor images are stored in the specified folder under the conditions set with the "Job Editing" screen "Image Log" tab.

(4) When the image log is ended, the FTP server is ended.

From the MELFA-Vision menu, click [Sensor] - [Image Log] - [Quit Log], or click the Quit Log button on the MELFA-Vision tool bar.

When the image log processing ends, "Image log reception enabled" is no longer displayed at the right end of the status bar of the MELFA-Vision main screen.


8.5. Vision Startup Settings

The startup settings are a function that sets the startup conditions for when the power supply to the vision sensor is switched On (select whether to start up online or offline and select the job to load).

This section explains the work for setting the startup using MELFA-Vision.

(1) Display the MELFA-Vision startup screen.

From the MELFA-Vision menu, click [Sensor] - [Startup] to display the “Startup” screen.

(2) Select whether to start up online or offline and what job to load when starting the vision sensor.

To start online, put a check in the [Online] checkbox. To start offline, remove the check from the checkbox.

The jobs in the vision sensor are displayed in the [Job] drop-down list so select the job to load. If there is no particular job to specify, select [<New>].

When the settings are complete, click the [OK] button.

The "Startup" screen is closed and display returns to the main screen.


8.6. User List Settings

The user list settings function is the function that sets the access rights for users that use the vision sensor, the FTP read and write rights, and password settings.

This section explains the work for setting the user list using MELFA-Vision.

(1) Display the MELFA-Vision "User List" screen.

From the MELFA-Vision menu, click [Sensor] - [User List] to display the “User List” screen.

(2) The "User List" screen is displayed.

There are three types of user access rights.

Table 8-1 List of User Access Rights for Vision Sensors

Full ........ The user has full access (without restriction) to the vision sensor. Jobs can be loaded, edited, and stored. Normally log on with this right when using MELFA-Vision.
Protected ... This user is not permitted to do FTP writing under the initial conditions. However, writing can be permitted.
Locked ...... This user is only permitted to check the vision sensor processing state with the MELFA-Vision camera picture.

There are two types of display item settings - normal and custom; for MELFA-Vision, the custom view is not displayed even if custom is selected.

FTP writing and reading can be permitted and prohibited with [Yes] and [No].

Also, the following three types of users can be set for the initial state for the vision sensor. The respective settings are shown in the table below.

Table 8-2 Registered User Name List

User name   Password   Access      Display   FTP writing   FTP reading
admin       None       Full        Normal    ○             ○
monitor     None       Locked      Custom    ×             ○
operator    None       Protected   Custom    ×             ×


(3) To add a user, click the [Add] button on the "User List" screen; to edit an existing user, select the user from the list and click the [Edit] button.

The "User" screen is displayed, so set the required items and click the [OK] button.

(4) To delete an existing user, select the user from the "User List" screen list and click the [Delete] button.

Check the contents of the confirmation screen, then click the [Yes] button.

* The "admin" user can not be deleted.

(5) When the settings are complete, click the [OK] button on the "User List" screen. The "User List" screen is closed and display returns to the main screen.


9. Detailed Explanation of Functions

This chapter explains the functions of this product in detail.

9.1. Vision Sensor Dedicated Commands and Status Variables

The robot controller has status variables and dedicated commands for controlling vision sensors. This section explains these dedicated commands and status variables.

9.1.1. How to Read Items

[Function] : Shows the command word function.

[Format] : Shows the command word argument input method.

<> indicates an argument.

[ ] indicates that the argument can be omitted.

□ indicates that a space is required.

[Term] : Shows the argument meaning, range, etc.

[Sample sentence] : Shows a sample sentence.

[Explanation] : Shows the function in detail and caution items.

[Error] : Shows the errors generated when the command word is executed.

9.1.2. MELFA-BASIC IV Commands

Here are the dedicated vision sensor commands.

Table 9-1 List of Dedicated Vision Sensor Commands

Command word   Contents
NVOPEN         Connects with the vision sensor and logs on to the vision sensor.
NVPST          Starts the specified vision program and receives the results.
NVRUN          Starts the specified vision program.
NVIN           Receives the results of the vision program specified with the NVRUN command.
NVCLOSE        Cuts off the connection with the vision sensor.
NVLOAD         Puts the specified vision program into the state in which it can be started.
NVTRG          Requests the vision sensor to capture an image and acquires the encoder value after the specified time.

Additional command word details are shown below.


(1) NVOPEN (network vision sensor line open)

[Function]

Connects with the specified vision sensor and logs on to that vision sensor.

[Format]

NVOPEN□“<COM number>”□AS□#<Vision sensor number>

[Term]

<COM number> (Can not be omitted):

Specify the communications line number in the same way as for the Open command.

"COM1:" can not be specified by it is monopolized by the operation panel front RS-232C.

Setting range: "COM2:" – "COM8:"

<Vision sensor number> (Can not be omitted)

Specifies a constant from 1 to 8 (the vision sensor number). Indicates the number for the vision sensor connection to the COM specified with the <COM number>.

Be careful. This number is shared with the <file number> of the Open command.

Setting range: 1 – 8

[Sample sentence]

100 IF M_NVOPEN(1)<>1 THEN 'If vision sensor number 1 log on is not complete

110 NVOPEN “COM2:” AS #1 ' Connects with the vision sensor connected to COM2 and sets its number as number 1.

120 ENDIF

130 WAIT M_NVOPEN(1)=1 ' Connects with vision sensor number 1 and waits for logon to be completed.

[Explanation]

1) Connects with the vision sensor connected to the line specified with the <COM number> and logs on to that vision sensor.

2) It is possible to connect to a maximum of 7 vision sensors at the same time. <Vision sensor numbers> are used in order to identify which vision sensor is being communicated with.

3) When used together with the Open command, the Open command <COM number> and <File number> and the <COM number> and <Vision sensor number> of this command are shared, so use numbers other than those specified with the Open command <COM number> and <File number>.

Example:
Normal example
10 OPEN “COM1:” AS #1
20 NVOPEN “COM2:” AS #2

Error example
10 OPEN “COM2:” AS #1
20 NVOPEN “COM2:” AS #2 ⇒ <COM number> already used
30 NVOPEN “COM3:” AS #1 ⇒ <Vision sensor number> already used

It is not possible to open more than one line in a configuration with one robot controller and one vision sensor.

If the IP address set for the line is the same as the one set in the [NETHSTIP] parameter, an "Ethernet parameter NETHSTIP setting" error occurs.

4) Logging on to the vision sensor requires the "User name" and "Password". It is necessary to set, in the robot controller [NVUSER] and [NVPSWD] parameters, a user name for which full access is set in the vision sensor and its password.

The user name and password can each be any combination of up to 15 numbers (0-9) and letters (A-Z).

(The T/B only supports uppercase letters, so when registering a new user, set the password in the vision sensor using uppercase letters.)

The user name with full access rights when the network vision sensor is purchased is "admin", and the password is blank (""). Therefore, the default values for the [NVUSER] and [NVPSWD] parameters are [NVUSER] = "admin" and [NVPSWD] = "".

When the "admin" password is changed with MELFA-Vision or a new user is registered, change the [NVUSER] and [NVPSWD] parameters accordingly. When the content of the [NVPSWD] parameter is displayed, "****" is shown. If the vision sensor side password is changed, open the [NVPSWD] parameter and directly overwrite the displayed "****" value with the new password. After making the change, reset the robot controller power.

[Caution]

When multiple vision sensors are connected to one robot controller, set the same user name and password for all of them.
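A minimal sketch of this multi-sensor case is shown below. The COM assignments ("COM2:"/"COM3:") and the sensor numbers are illustrative assumptions only; both sensors are assumed to share the user name and password set in [NVUSER]/[NVPSWD].

100 IF M_NVOPEN(1)<>1 THEN
110 NVOPEN "COM2:" AS #1 'Vision sensor number 1 on COM2 (assumed assignment)
120 ENDIF
130 IF M_NVOPEN(2)<>1 THEN
140 NVOPEN "COM3:" AS #2 'Vision sensor number 2 on COM3 (assumed assignment)
150 ENDIF
160 WAIT M_NVOPEN(1)=1 'Waits for logon to vision sensor number 1 to be completed
170 WAIT M_NVOPEN(2)=1 'Waits for logon to vision sensor number 2 to be completed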

5) The state of communications with the network vision sensor when this command is executed can be checked with M_NVOPEN. For details, see the explanation of M_NVOPEN.

6) If the program is cancelled while this command is being executed, it stops immediately. In order to log on to the vision sensor, it is necessary to reset the robot program, then start.

7) When this command is used with multi-tasking, there are the following restrictions.

The <COM number> and <Vision sensor number> must not be duplicated in different tasks.

If the same <COM number> is used in another task, the "attempt was made to open an already open communication file" error occurs.

       【SLOT 2】

10 NVOPEN "COM2:" AS #1

20 ・・・・・・・

       【SLOT 3】

10 NVOPEN "COM2:" AS #2

20 ・・・・・・

If the same <vision sensor number> is used in another task, the "attempt was made to open an already open communication file" error occurs.

       【SLOT 2】

10 NVOPEN "COM2:" AS #1

20 ・・・・・・・

       【SLOT 3】

10 NVOPEN "COM3:" AS #1

20 ・・・・・・

8) A program start condition of "Always" and the continue function are not supported.

9) Three robots can control the same vision sensor at the same time. If a fourth robot logs on, the line for the first robot is cut off, so be careful when constructing the system.

10) The line is not closed with an End command in a program called out with a CALLP command, but the line is closed with a main program End command. The line is also closed by a program reset.

11) If an interrupt condition is established while this command is being executed, the interrupt processing is executed immediately even during processing of this command.

[Error]

1) If the data type for an argument is incorrect, the "syntax error in input command" error is generated.

2) If there is an abnormal number of command arguments (too many or too few), the "incorrect argument count" error occurs.

3) If the character string specified in <COM number> is anything other than "COM2:" through "COM8:", the "argument out of range" error occurs.

4) If the value specified as the <vision sensor number> is anything other than "1" through "8", the "argument out of range" error occurs.

5) If a <COM number> for which the line is already connected is specified (including a <File number> for which the line has been opened with an Open command), the "attempt was made to open an already open communication file" error occurs.

6) If the vision sensor is not connected before the line is opened, the "vision sensor not connected" error occurs. (The same maker parameter [COMTIMER] as in the Ethernet specifications is used; currently "1s".)

7) If the same <COM number> or the same <vision sensor number> is specified in another task, the "attempt was made to open an already open communication file" error occurs.

8) If the user name or password specified in the [NVUSER] (user name) and [NVPSWD] (password) parameters is wrong, the "wrong password" error occurs.

9) If the communications line is cut while this command is being executed, the "abnormal communications" error occurs and the robot controller side line is closed.

10) If a program is used for which the start condition is "Always", the "this command can not be used if the start condition is ERR or ALW" error occurs.


(2) NVPST (Network vision program start)

[Function]

Starts the specified vision program and obtains the results.

The data received from the vision sensor is stored in the robot controller robot status variables.

[Format]

NVPST□#<Vision sensor number>,"<Vision program (job) name>"

, "<Recognition count cell>", "<Start cell>", "<End cell>", <Type> [, <Timeout>]

[Term]

<Vision sensor number> (Can not be omitted)

This specifies the number of the vision sensor to control.

Setting range: 1 - 8

<Vision program (job) name> (Can not be omitted)

Specifies the name of the vision program to start.

The vision program extension (.job) can be omitted.

The only characters that can be used are "0" – "9", "A" – "Z", "a" – "z", "-", and "_".

<Recognition count cell> (Can not be omitted)

Specifies the cell in which the count of work recognized by the vision sensor is stored.

Setting range: Row: 0-399 Column: "A" – "Z" Example: "A5"

The count of work recognized by the vision sensor that is stored in the specified cell is saved in M_NVNUM(*). (*=1 - 8)

* When a vision program is created with MELFA-Vision, see "9.2.3 Job edit screen ([Result Cell Position] tab)" and input the value indicated by MELFA-Vision.

<Start cell>/<End cell> (Can not be omitted)

Specifies the cell range (rows and columns) in which the results recognized by the vision sensor are stored.

The contents of the specified cells are stored in one of the status variables P_NVS*(30), M_NVS*(30,10), or C_NVS*(30,10). (*=1 - 8)

Setting range: Row: 0-399 Column: "A" – "Z" Example: "A5", "C10", etc.

However, the error "specified cell value out of range" occurs when the number of data that the range specified by < Start cell > and < End cell > is included in row 30, column 10 or the cell exceeds as many as 90.

* When a vision program is created with MELFA-Vision, see "9.2.3 Job edit screen ([Result Cell Position] tab)" and input the value indicated by MELFA-Vision.

(Figure: example of the cell layout in the vision program)

When creating a vision program this way and acquiring the data (X, Y, C) only for Robot 1, specify

<Start cell> = "J96" <End cell> = "L98".

<Type> (Can not be omitted)

Specifies the type of status variable in which the results recognized by the vision sensor are stored.

One cell can hold multiple recognition result values as comma-delimited data. However, one cell is limited to 255 characters or less.

The character string data stored in the cells from <Start cell> to <End cell> (one value per cell, or comma-delimited values within a cell) is stored in a position type, numeric type, or character type status variable according to the <Type> specification.

Setting range: 0 - 7 (set values 4 - 7 can be used with software version K7 or later)


For details on each set value, see "Table 9-2 Storage in status variables by <Type> specified value".

Table 9-2 Storage in status variables by <Type> specified value

State of cell: one data item per cell
  <Type> 0: P_NVS*() (Position type)
  <Type> 1: M_NVS*() (Single-precision real number type)
  <Type> 2: C_NVS*() (Text type)
  <Type> 3: M_NVS*() and C_NVS*() (Single-precision real number type and text type)

State of cell: two or more comma(,)-delimited data items per cell
  <Type> 4: P_NVS*() (Position type)
  <Type> 5: M_NVS*() (Single-precision real number type)
  <Type> 6: C_NVS*() (Text type)
  <Type> 7: M_NVS*() and C_NVS*() (Single-precision real number type and text type)

(*1) The "*" in each status variable name corresponds to the specified <Vision sensor number>.

Position data P_NVS*(): each value is converted into a numeric value and stored sequentially in the coordinate elements of the position variable. If a character that cannot be converted is included, it is stored as "0".

Moreover, data stored in the fourth and subsequent columns of the cell range specified with <Start cell> and <End cell> is not acquired.

Numeric data M_NVS*(): each value is converted into a numeric value and stored in the numeric variable. If a character that cannot be converted is included, it is stored as "0". M_NVS*() is a two-dimensional array, and all the data in the range specified with <Start cell> and <End cell> can be stored. The contents are explained under [Explanation].

Text data C_NVS*(): each value is stored in the character type variable as it is. The vision sensor returns vision program functions, kanji codes, etc. as the "#" character, and returns a NULL character for a blank cell; in all these cases the value is stored in C_NVS*() as a NULL character. C_NVS*() is a two-dimensional array, and all the data in the range specified with <Start cell> and <End cell> can be stored.

An example of storing comma-delimited information (up to 255 characters) in one cell is shown below. When "4" is specified for the <Type>, "J91" is specified for the <Start cell>, and "J91" is specified for the <End cell> in this example, the following result is obtained.

Variable Data(X,Y,Z,A,B,C,L1,L2)

P_NVS1(1) (+336.43,-71.14,+0.00,+0.00,+0.00,+122.27,+0.00,+0.00)

P_NVS1(2) (+344.10,+151.54,+0.00,+0.00,+0.00,-5.78,+0.00,+0.00)

P_NVS1(3) (+224.58,+274.84,+0.00,+0.00,+0.00,+31.24,+0.00,+0.00)

P_NVS1(4) (+0.00,+0.00,+0.00,+0.00,+0.00,+0.00,+0.00,+0.00)

・・・・・・ ・・・・・・
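A short hedged sketch of the call that would produce this kind of result is shown below (the program name "TEST" and the cells "E76"/"J91" follow the cell layout assumed in this example, and the line to vision sensor number 1 is assumed to have already been opened with NVOPEN).

140 NVPST #1,"TEST","E76","J91","J91",4,10 'Type 4: the comma-delimited data in cell J91 is stored in P_NVS1()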

<Time out> (If omitted, 10)

Specifies the time-out time (in seconds).

Specification range: Integer 1-32767


[Sample sentence]

100 IF M_NVOPEN(1)<>1 THEN 'If vision sensor number 1 log on is not complete

110 NVOPEN “COM2:” AS #1 'Connects with the vision sensor connected to COM2.

120 ENDIF

130 WAIT M_NVOPEN(1)=1 ' Connects with vision sensor number 1 and waits for logon to be completed.

140 NVPST #1,”TEST”,”E76”,”J81”,”L84”,1,10

'Starts the "Test" program, receives the recognition count from the E76 cell and the recognition results from cells J81 through L84, and stores this in M_NVS1 ().

150 'Processes referencing the acquired data.

160 ・・・・

300 NVCLOSE #1 'Cuts the line with the vision sensor connected to COM2.

[Explanation]

1) Starts the specified vision program on the specified vision sensor and receives the results.

2) Within the timeout time, does not move to the next step until the results are received from the vision sensor. However, if the robot program is stopped, this command is immediately cancelled. Processing is continued with a restart.

3) If the specified <vision program name> is already loaded, processing is executed without loading the program, so the processing time is shortened.

4) When this command is used with multi-tasking, it is necessary to execute the NVOPEN command in the task using this command. Also, use the <vision sensor number> specified with the NVOPEN command.

5) When type from "4" to "7" is specified for <Type>, the improvement of the data receiving speed from the

Vision Sensor can be expected.

A specified position of <Start Cell> and <End Cell> is different depending on a specified value of <Type>, and refer to "Job Editing" Screen - "Result Cell Position" Tab of MELFA-Vision for a specified position.

6) A program start condition of "Always" and the continue function are not supported.

7) When multi-mechanism mode is used, specify the <Start cell> and <End cell> so as to acquire information for the number of robots used, and specify a type from "1" to "3".

Example: Handling of vision sensor information for one robot in multi-mechanism mode

When, as in the figure above, the information for the first robot is stored in vision program sheet cells <J96> through <M98> and the information for the second robot is stored in cells <O96> through <R98>, <J96> and <M98> are specified as the <Start cell> and <End cell>.

When "1" is specified as the type with the NVPST command, the data is stored in M_NVS1() as follows.

Column

5 6 7 8 9

3 310.81 43.65 -34.312 0.0 0.0 0.0 0.0 0.0 0.0


Example: Handling of vision sensor information for two robots in multi-mechanism mode

<J96> and <R98> are specified as the <Start cell> and <End cell>.

When "1' is specified as the type with the NVPST command, it is stored in M_NVS1(30,10) as follows.

Column

Row 5 6 7 8 9

2 381.288 10.846 97.048 0.0 89.582 99.582 -118.311

97.048

3 310.81 43.65 -34.312 0.0 0.0 139.151 149.151 -163.469 95.793

8) Up to three robots can control the same vision sensor at the same time, but this command can not be used by more than one robot at the same time. Use this command on any one of the robots.

Example of tracking system with three robots and one vision sensor

(Figure: one vision sensor shared by three controllers. ① The controller (master) outputs the image capture request to the vision sensor, ② it outputs the reception enabled notice to the other controllers, and ③ each controller performs data reception.)

<Procedure>

① Of the three robots, one is set as the master and the controller (master) outputs the "image capture request" to the vision sensor with the NVPST command. The vision sensor starts the image capture and when it is complete, returns that to the controller (master).

② The controller (master) outputs the "reception enabled notice" to the other two robots. (Taking cost and degree of difficulty into account, we recommend to connect between robots with I/O. The other robots are connected with Ethernet, so interactive notification with text string transmission/reception is possible.)

③ The respective robots receive the information they respectively require with NVIN commands.
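A minimal sketch of this flow is shown below. The I/O signal number (10), the COM assignments, and the job name "TEST" with cells "E76"/"J81"/"L84" are illustrative assumptions, not fixed values.

Controller (master) side:
100 IF M_NVOPEN(1)<>1 THEN
110 NVOPEN "COM2:" AS #1 'Logs on to the shared vision sensor
120 ENDIF
130 WAIT M_NVOPEN(1)=1
140 M_OUT(10)=0 'Clears the "reception enabled" signal (signal number assumed)
150 NVPST #1,"TEST","E76","J81","L84",1,10 '① Image capture request and result reception
160 M_OUT(10)=1 '② "Reception enabled" notice to the other robots

Other controller side (each receives only the data it requires):
100 IF M_NVOPEN(1)<>1 THEN
110 NVOPEN "COM3:" AS #1 'Logs on to the same vision sensor on its own line
120 ENDIF
130 WAIT M_NVOPEN(1)=1
140 WAIT M_IN(10)=1 'Waits for the "reception enabled" notice from the master
150 NVIN #1,"TEST","E76","J81","L84",1,10 '③ Receives the results of the job started by the master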


Example assembling system with two robots and one vision sensor

(Figure: one vision sensor shared by two controllers. On the first controller: ① Using ON, ② image capture request, ③ data reception, ④ Using OFF. On the second controller: ⑤ Using ON, ⑥ image capture request, ⑦ data reception, ⑧ Using OFF.)

<Procedure>

① The controller using the vision sensor checks that the vision sensor is not being used by another controller and outputs the "Using" On signal to that controller.

② It outputs the "Image capture request" to the vision sensor.

③ When the vision sensor image processing is complete, the controller receives the necessary data.

④ The controller switches Off the "Using" signal it had output to the other controller.

The other controller executes Steps 1 - 4.

In this way, the two robot controllers use the vision sensor alternating or as necessary.
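A minimal sketch of one controller's side of this interlock is shown below. The input/output signal numbers (11 for the other controller's "Using" signal, 12 for the own "Using" signal), the job name "TEST", and the cells are illustrative assumptions.

100 IF M_NVOPEN(1)<>1 THEN
110 NVOPEN "COM2:" AS #1 'Logs on to the shared vision sensor
120 ENDIF
130 WAIT M_NVOPEN(1)=1
140 WAIT M_IN(11)=0 'Checks that the other controller is not using the vision sensor (signal assumed)
150 M_OUT(12)=1 '① Switches the own "Using" signal On (signal assumed)
160 NVRUN #1,"TEST" '② Image capture request
170 NVIN #1,"TEST","E76","J81","L84",1,10 '③ Receives the necessary data
180 M_OUT(12)=0 '④ Switches the "Using" signal Off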

9) If an interrupt condition is established while this command is being executed, the interrupt processing is executed immediately.

[Errors]

1) If the data type for an argument is incorrect, a "syntax error in input command statement" error is generated.

2) If there is an abnormal number of command arguments (too many or too few), an "incorrect argument count" error occurs.

3) If the <vision sensor number> is anything other than "1" through "8", an "argument out of range" error occurs.

4) If the NVOPEN command has not been opened with the number specified as the <vision sensor number>, an "abnormal vision sensor number specification" error occurs.

5) If the <vision program name> exceeds 15 characters, an "abnormal vision program name" error occurs.

6) If a <vision program name> uses a character other than "0" - "9", "A" - "Z", "-", or "_" (including lowercase letters), an "abnormal vision program name" error occurs.

7) If the program specified in the <vision program name> is not in the vision sensor, a "vision program not present" error occurs.

8) If the <Recognition count cell>, <Start cell>, or <End cell> contains a number other than "0" - "399" or a letter other than "A" - "Z", an "argument out of range" error occurs.

9) If there is no value in the cell specified in <Recognition count cell>, an "incorrect value in recognition count cell" error occurs.

10) If the <Start cell> and <End cell> are reversed, a "specified cell value out of range" error occurs.

11) If the number of data items included in the range specified by <Start cell> and <End cell> exceeds 90, a "specified cell value out of range" error occurs.

12) If the range specified by <Start cell> and <End cell> exceeds 30 rows by 10 columns, a "specified cell value out of range" error occurs.

13) If the <Type> is other than "0" - "7", an "argument out of range" error occurs.

14) If the <Timeout> is other than "1" - "32767", an "argument out of range" error occurs.

15) If the vision sensor does not respond within the time specified as the <Timeout>, or within the first 10 seconds if the <Timeout> parameter is omitted, a "vision sensor response timeout" error occurs.

16) If the vision program's image capture specification is set to anything other than "Camera" (all trigger command), "External trigger", or "Manual trigger", an "abnormal image capture specification" error occurs.

17) If the vision sensor is "offline", a "Put online" error occurs, so put the vision sensor "Online".

18) If the communications line is cut while this command is being executed, an "abnormal communications" error occurs and the robot controller side line is closed.


(3) NVLOAD (network vision sensor load)

[Function]

Loads the specified vision program into the vision sensor.

[Format]

NVLOAD□#<Vision sensor number>,”<Vision program (job) name>”

[Term]

<Vision sensor number> (Can not be omitted)

This specifies the number of the vision sensor to control.

Setting range: 1 - 8

<Vision program (job) name> (Can not be omitted)

Specifies the name of the vision program to start.

The vision program extension (.job) can be omitted.

The only characters that can be used are "0" - "9", "A" - "Z", "a" - "z", "-", and "_".

[Sample sentence]

100 IF M_NVOPEN(1)<>1 THEN 'If vision sensor number 1 log on is not complete

110 NVOPEN “COM2:” AS #1 'Connects with the vision sensor connected to COM2.

120 ENDIF

130 WAIT M_NVOPEN(1)=1 ' Connects with vision sensor number 1 and waits for logon to be completed.

140 NVLOAD #1,”TEST” 'Loads the "Test".

150 NVPST #1, ””,”E76”,”J81”,”L84”,0,10

'Receives the recognition count recognized with the "Test" program from the E76 cell and the recognition results from cells J81 through L84, and stores them in P_NVS1().

160 ・・・・

300 NVCLOSE #1 'Cuts the line with the vision sensor connected to COM2.

[Explanation]

1) Loads the specified vision program into the specified vision sensor.

2) This command moves to the next step at the point in time when the vision program is loaded into the vision sensor.

3) If the program is cancelled while this command is being executed, it stops immediately.

4) If the specified <vision program name> is already loaded, the command ends with no processing.

5) When this command is used with multi-tasking, it is necessary to execute the NVOPEN command in the task using this command. Also, use the <vision sensor number> specified with the NVOPEN command.

6) A program start condition of "Always" and the continue function are not supported.

7) If an interrupt condition is established while this command is being executed, the interrupt processing is executed immediately.

[Errors]

1) If the data type for an argument is incorrect, a "syntax error in input command statement" error is generated.

2) If there is an abnormal number of command arguments (too many or too few), an "incorrect argument count" error occurs.

3) If the <vision sensor number> is anything other than "1" through "8", an "argument out of range" error occurs.

4) If the NVOPEN command has not been opened with the number specified as the <vision sensor number>, an "abnormal vision sensor number specification" error occurs.

5) If the <vision program name> exceeds 15 characters, an "abnormal vision program name" error occurs.

6) If a <vision program name> uses a character other than "0" - "9", "A" - "Z", "-", or "_" (including lowercase letters), an "abnormal vision program name" error occurs.

7) If the program specified in the <vision program name> is not in the vision sensor, a "vision program does not exist" error occurs.

8) If the vision sensor is "offline", a "Put online" error occurs, so put the vision sensor "Online".

9) If the communications line is cut while this command is being executed, an "abnormal communications" error occurs and the robot controller side line is closed.


(4) NVTRG (network vision sensor trigger)

[Function]

Requests the specified vision program to capture an image.

[Format]

NVTRG□#<Vision sensor number>,<Delay time>,<Encoder 1 value read-out variable>[,<Encoder 2 value read-out variable>] ... [,<Encoder 8 value read-out variable>]

[Term]

<Vision sensor number> (Can not be omitted)

This specifies the number of the vision sensor to control.

Setting range: 1 - 8

<Delay time> (Can not be omitted)

This specifies the delay time (in ms) from when the image capture request is output to the vision sensor until the encoder value is obtained.

Setting range: 0 - 150 ms

<Encoder n value read-out variable> (Can be omitted from the second one on)

Specifies the double precision numeric variable into which the read out external encoder n value is set.

Note: n is 1 - 8.

[Sample sentence]

100 IF M_NVOPEN(1)<>1 THEN 'If vision sensor number 1 logon is not complete

110 NVOPEN “COM2:” AS #1 'Connects with the vision sensor connected to COM2.

120 ENDIF

130 WAIT M_NVOPEN(1)=1 ' Connects with vision sensor number 1 and waits for logon to be completed.

140 NVRUN #1,”TEST” 'Starts the "Test" program.

150 NVTRG #1,15,M1#,M2# 'Requests the vision sensor to capture an image and acquires encoders

1 and 2 after 15 ms.

160 NVIN #1, ”TEST”,”E76”,”J81”,”L84”,0,10

'Receives the recognition count recognized with the "Test" program from the E76 cell and the recognition results from cells J81 through L84, and stores this in P_NVS1 ().

170 ・・・・

300 NVCLOSE #1 'Cuts the line with the vision sensor connected to COM2.

[Explanation]

1) Outputs the image capture request to the specified vision sensor and acquires the encoder value after the specified time. The acquired encoder value is stored in the specified numeric variable.

2) This command moves to the next step at the point in time when the encoder value is acquired the specified time after the image capture request to the vision sensor.

3) If the program is cancelled while this command is being executed, it stops immediately.

4) For receiving data from the vision sensor, use the NVIN command.

5) When this command is used with multi-tasking, it is necessary to execute the NVOPEN command in the task using this command. Also, use the <vision sensor number> specified with the NVOPEN command.

6) A program start condition of "Always" and the continue function are not supported.

7) Up to three robots can control the same vision sensor at the same time, but this command can not be used by more than one robot at the same time. Use this command on any one of the robots.

8) If an interrupt condition is established while this command is being executed, the interrupt processing is executed immediately.


[Errors]

(1) If the data type for an argument is incorrect, a "syntax error in input command statement" error is generated.

(2) If there is an abnormal number of command arguments (too many or too few), an "incorrect argument count" error occurs.

(3) If the <vision sensor number> is anything other than "1" through "8", an "argument out of range" error occurs.

(4) If the NVOPEN command has not been opened with the number specified as the <vision sensor number>, an "abnormal vision sensor number specification" error occurs.

(5) If the vision program's image capture specification is set to anything other than "Camera" (all trigger command), "External trigger", or "Manual trigger", an "abnormal image capture specification" error occurs.

(6) If the vision sensor is "offline", a "Put online" error occurs, so put the vision sensor "Online".

(7) If the communications line is cut while this command is being executed, an "abnormal communications" error occurs and the robot controller side line is closed.


(5) NVRUN (network vision sensor run)

[Function]

Starts the specified vision program.

[Format]

NVRUN□#<Vision sensor number>,"<Vision program (job) name>"

[Term]

<Vision sensor number> (Can not be omitted)

This specifies the number of the vision sensor to control.

Setting range: 1 - 8

<Vision program (job) name> (Can not be omitted)

Specifies the name of the vision program to start.

The vision program extension (.job) can be omitted.

The only characters that can be used are "0" - "9", "A" - "Z", "a" - "z", "-", and "_".

[Sample sentence]

100 IF M_NVOPEN(1)<>1 THEN 'If vision sensor number 1 log on is not complete

110 NVOPEN “COM2:” AS #1 'Connects with the vision sensor connected to COM2.

120 ENDIF

130 WAIT M_NVOPEN(1)=1 ' Connects with vision sensor number 1 and waits for logon to be completed.

140 NVRUN #1,”TEST” 'Starts the "Test" program.

150 NVIN #1, ”TEST”,”E76”,”J81”,”L84”,0,10

'Receives the recognition count recognized with the "Test" program from the E76 cell and the recognition results from cells J81 through L84, and stores this in P_NVS1 (30).

160 ・・・・

300 NVCLOSE #1 'Cuts the line with the vision sensor connected to COM2.

[Explanation]

1) Starts the specified vision program in the specified vision sensor.

2) This command moves to the next step after it has verified that the vision sensor has received the image capture and image processing command.

3) If the program is cancelled while this command is being executed, it stops immediately.

4) If the specified <vision program name> is already loaded, only image capture and image processing are executed. (The vision program is not loaded again.)

5) For receiving data from the vision sensor, use the NVIN command.

6) When this command is used with multi-tasking, it is necessary to execute the NVOPEN command in the task using this command. Also, use the <vision sensor number> specified with the NVOPEN command.

7) A program start condition of "Always" and the continue function are not supported.

8) When multi-mechanism mode is used and data for multiple robots is required, make a vision program that creates data for multiple robots with one image capture request.


9) Up to three robots can control the same vision sensor at the same time, but this command can not be used by more than one robot at the same time. Use this command on any one of the robots.

10) If an interrupt condition is established while this command is being executed, the interrupt processing is executed immediately.


[Errors]

1) If the data type for an argument is incorrect, a "syntax error in input command statement" error is generated.

2) If there is an abnormal number of command arguments (too many or too few), an "incorrect argument count" error occurs.

3) If the <vision sensor number> is anything other than "1" through "8", an "argument out of range" error occurs.

4) If the NVOPEN command has not been opened with the number specified as the <vision sensor number>, an "abnormal vision sensor number specification" error occurs.

5) If the <vision program name> exceeds 15 characters, an "abnormal vision program name" error occurs.

6) If a <vision program name> uses a character other than "0" - "9", "A" - "Z", "-", or "_" (including lowercase letters), an "abnormal vision program name" error occurs.

7) If the program specified in the <vision program name> is not in the vision sensor, a "vision program not present" error occurs.

8) If the vision program's image capture specification is set to anything other than "Camera" (all trigger command), "External trigger", or "Manual trigger", an "abnormal image capture specification" error occurs.

9) If the vision sensor is "offline", a "Put online" error occurs, so put the vision sensor "Online".

10) If the communications line is cut while this command is being executed, an "abnormal communications" error occurs and the robot controller side line is closed.


(6) NVIN (network vision sensor input)

[Function]

Receives the results of the recognition by the vision sensor.

The data received from the vision sensor is stored in the robot controller robot status variables.

[Format]

NVIN□#<Vision sensor number>,[<Vision program (job) name>]

, <Recognition count cell>,<Start cell>,<End cell>,<Type> [,<Timeout>]

[Term]

<Vision sensor number> (Can not be omitted)

This specifies the number of the vision sensor to control.

Setting range:1 - 8

<Vision program (job) name> (Can be omitted)

Specifies the name of the vision program to obtain the recognition results of.

If this parameter is omitted, the results are obtained from the currently active vision program.

The vision program extension (.job) can be omitted.

The only characters that can be used are "0" - "9", "A" - "Z", "a" - "z", "-", and "_".

<Recognition count cell> (Can not be omitted)

Specifies the cell in which the count of work recognized by the vision sensor is stored.

Setting range: Row: 0-399 Column: "A" - "Z" Example: "A5"

* When a vision program is created with MELFA-Vision, input the value indicated by MELFA-Vision.

<Start cell>/<End cell> (Can not be omitted)

Specifies the cell range in which the results recognized by the vision sensor are stored.

The contents of the specified cells are stored in one of the status variables P_NVS*(), M_NVS*(), or C_NVS*(). (*=1 - 8)

Setting range: Row: 0-399 Column: "A" - "Z" Example: "A5", "C10", etc.

* When a vision program is created with MELFA-Vision, input the value specified by MELFA-Vision.

<Type> (Can not be omitted)

Specifies the type of status variable in which the results recognized by the vision sensor are stored.

Setting range: 0 - 7 (set values 4 - 7 can be used with software version K7 or later)

Refer to the explanation of the NVPST command for the handling of each set value.

<Time out> (If omitted, 10)

Specifies the time-out time (in seconds).

Specification range: Integer 1-32767

[Sample sentence]

100 IF M_NVOPEN(1)<>1 THEN 'If vision sensor number 1 log on is not complete

110 NVOPEN “COM2:” AS #1 'Connects with the vision sensor connected to COM2.

120 ENDIF

130 WAIT M_NVOPEN(1)=1 ' Connects with vision sensor number 1 and waits for logon to be completed.

140 NVRUN #1,”TEST” 'Starts the "Test" program.

150 NVIN #1, ”TEST”,”E76”,”J81”,”L84”,0,10

'Receives the recognition count recognized with the "Test" program from the E76 cell and the recognition results from cells J81 through L84, and stores this in P_NVS1 (30).

160 ・・・・・

300 NVCLOSE #1 'Cuts the line with the vision sensor connected to COM2.


[Explanation]

1) Receives the recognition results from the specified vision program of the specified vision sensor.

2) Within the timeout time, does not move to the next step until the results are received from the vision sensor.

However, if the robot program is stopped, this command is cancelled. Processing is executed from the cancelled state with a restart.

3) When this command is used with multi-tasking, it is necessary to execute the NVOPEN command and

NVRUN command in the task using this command. At this time, use the <vision sensor number> specified with the NVOPEN command.

4) When type from "4" to "7" is specified for <Type>, the improvement of the data receiving speed from the

Vision Sensor can be expected.

A specified position of <Start Cell> and <End Cell> is different depending on a specified value of <Type>, and refer to "Job Editting" Screen - "Result Cell Position" Tab of MELFA-Vision for a specified position.

5) A program start condition of "Always" and the continue function are not supported.

6) When using multi-mechanism mode, see the explanation of the NVPST command.

7) Up to three robots can control the same vision sensor at the same time, but this command can not be used by more than one robot at the same time. Use this command on any one of the robots.

8) If an interrupt condition is established while this command is being executed, the interrupt processing is executed immediately. Processing is executed when the interrupt processing ends or is continued with a restart.

9) When this command is executed, it is necessary to specify beforehand with the NVPST command or

NVRUN command the vision program specified with the <Vision program name>.

10) In order to shorten the tact time, it is possible to do other work after executing the NVRUN command and execute NVIN when it is required.

11) Note that if the program stops between NVRUN and NVIN, the results when NVRUN is executed and the results when NVIN is executed may be different.
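A minimal sketch of this pattern is shown below; the intermediate motion commands and the positions P2/P3 are illustrative assumptions.

140 NVRUN #1,"TEST" 'Starts image capture and image processing on the vision sensor
150 MOV P2 'Other work (for example, moving toward the place position) can be done while the sensor processes
160 MOV P3
170 NVIN #1,"TEST","E76","J81","L84",0,10 'Receives the results only when they are actually needed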

[Errors]

1) If the data type for an argument is incorrect, a "syntax error in input command statement" error is generated.

2) If there is an abnormal number of command arguments (too many or too few), an "incorrect argument count" error occurs.

3) If the <vision sensor number> is anything other than "1" through "8", an "argument out of range" error occurs.

4) If the NVOPEN command has not been opened with the number specified as the <vision sensor number>, an "abnormal vision sensor number specification" error occurs.

5) If the <vision program name> exceeds 15 characters, an "abnormal vision program name" error occurs.

6) If a <vision program name> uses a character other than "0" - "9", "A" - "Z", "-", or "_" (including lowercase letters), an "abnormal vision program name" error occurs.

7) If the program specified in the <vision program name> is not in the vision sensor, a "vision program does not exist" error occurs.

8) If the program specified in the <vision program name> has not been started by an NVRUN command, an "abnormal vision program name" error occurs.

9) If the <Recognition count cell>, <Start cell>, or <End cell> contains a number other than "0" - "399" or a letter other than "A" - "Z", an "argument out of range" error occurs.

10) If there is no value in the cell specified in <Recognition count cell>, an "invalid value specified for recognition count cell" error occurs.

11) If the number of data items included in the range specified by <Start cell> and <End cell> exceeds 90, a "specified cell value out of range" error occurs.

12) If the range specified by <Start cell> and <End cell> exceeds 30 rows by 10 columns, a "specified cell value out of range" error occurs.

13) If the <Type> is other than "0" - "7", an "argument out of range" error occurs.

14) If the <Start cell> and <End cell> are reversed, a "specified cell value out of range" error occurs.

15) If the <Timeout> is other than "1" - "32767", an "argument out of range" error occurs.

16) If the vision sensor does not respond within the time specified as the <Timeout>, or within the first 10 seconds if the <Timeout> parameter is omitted, a "vision sensor response timeout" error occurs.

17) If the communications line is cut while this command is being executed, an "abnormal communications" error occurs and the robot controller side line is closed.


(7) NVCLOSE (network vision sensor line close)

[Function]

Cuts the line with the specified vision sensor.

[Format]

NVCLOSE□[[#]<Vision sensor number>[,[#]<Vision sensor number> ...]]

[Term]

<Vision sensor number> (Can be omitted)

Specifies a constant from 1 to 8 (the vision sensor number). Indicates the number for the vision sensor connection to the COM specified with the <COM number>.

When this parameter is omitted, all the lines (vision sensor lines) opened with an NVOPEN command are closed.

Also, up to 8 <vision sensor numbers> can be specified. They are delimited with commas.

Setting range: 1 - 8

[Sample sentence]

100 IF M_NVOPEN(1)<>1 THEN ' When logon has not been completed for vision sensor number 1

110 NVOPEN “COM2:” AS #1 ' Connects with the vision sensor connected to COM2 and sets its number as number 1.

120 ENDIF

130 WAIT M_NVOPEN(1)=1 ' Connects with vision sensor number 1 and waits for logon to be completed.

140 ・・・・・

300 NVCLOSE #1 'Cuts the line with the vision sensor connected to COM2.

[Explanation]

1) Cuts the line with the vision sensor connected with the NVOPEN command.

2) If the <vision sensor number> is omitted, cuts the line with all the vision sensors.

3) If a line is already cut, execution shifts to the next step.

4) Because up to seven vision sensors can be connected at the same time, <Vision sensor numbers> are used in order to identify which vision sensor to close the line for.

5) If the program is cancelled while this command is being executed, execution continues until processing of this command is complete.

6) When this command is used with multi-tasking, it is necessary to close only the lines opened by executing an NVOPEN command in the task using this command. At this time, use the <Vision sensor number> specified with the NVOPEN command.

7) A program start condition of "Always" and the continue function are not supported.

8) If an END command is used, all the lines opened with an NVOPEN command or OPEN command are closed. However, lines are not closed with an End command in a program called out with a CALLP command.

Lines are also closed by a program reset, so when an END command or a program reset is executed, it is not necessary to close lines with this command.

9) The continue function is not supported.

10) If an interrupt condition is established while this command is being executed, the interrupt processing is executed after this command is completed.
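A short sketch of the closing forms described above is shown below; the sensor numbers 1 and 2 are illustrative, and the lines are assumed to have been opened with NVOPEN beforehand.

300 NVCLOSE #1,#2 'Cuts the lines for vision sensor numbers 1 and 2
310 NVCLOSE 'Cuts all the lines opened with NVOPEN commands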

[Errors]

1) If the value specified as the <vision sensor number> is anything other than "1" through "8", an "argument out of range" error occurs.

2) If there are more than eight command arguments, an "incorrect argument count" error occurs.


9.1.3. Robot status variables

Here are the status variables for vision sensors.

Be careful. The data for these status variables is not backed up by the RT ToolBox backup function.

These status variables can be used with robot controller software version K6 or later.

Table 9-3 Vision Sensor Status Variable List

Variable name       Array elements   Contents                             Attribute(*1)   Data type
M_NVOPEN            8                Line connection status               R               Integer type
M_NVNUM             8                Vision sensor work detection count   R               Integer type
P_NVS* (*=1 - 8)    30               Vision sensor detection data         R               Position type
M_NVS* (*=1 - 8)    30, 10           Vision sensor detection data         R               Single-precision real number type
C_NVS* (*=1 - 8)    30, 10           Vision sensor detection data         R               Character string type

(*1) R indicates that a status variable is read-only.

The details of the status variables are as follows.

(1) M_NVOPEN

[Function]

Indicates the vision sensor line connection status.

[Array meaning]

Array elements (1 - 8): Vision sensor numbers

[Explanation of values returned]

0: Line connecting (logon not complete) 1: Logon complete -1: Not connected

[Usage]

After an NVOPEN command is executed, checks whether or not the line with the vision sensor is connected and the vision sensor logged onto.

[Sample sentence]

100 IF M_NVOPEN(1)<>1 THEN ' If vision sensor number 1 is not connected

110 NVOPEN “COM2:” AS #1 ' Connects with the vision sensor connected to COM2 and sets its number as number 1.

120 ENDIF

130 WAIT M_NVOPEN(1)=1 ' Connects with vision sensor number 1 and waits for the logon state.

140 ・・・・・

300 NVCLOSE #1 'Cuts the line with the vision sensor connected to COM2.

[Explanation]

1) Indicates the status of a line connected with a network vision sensor with an NVOPEN command when the line is opened.

2) The initial value is "-1". At the point in time that the NVOPEN command is executed and the line is connected, the value becomes "0" (line connecting). At the point in time that the network vision sensor logon is completed, the value becomes "1" (logon complete).

3) This variable strongly resembles the status of status variable M_OPEN, but whereas M_OPEN becomes "1" when the connection is verified, M_NVOPEN becomes "1" when the vision sensor logon is complete.

[Errors]

(1) If the type of data specified as an array element is incorrect, a "syntax error in input command statement" error occurs.

(2) If there is an abnormal number of array elements (too many or too few), an "incorrect argument type" error occurs.

(3) If an array element other than "1" through "8" is specified, an "array element mistake" error occurs.


(2) M_NVNUM

[Function]

Indicates the number of pieces of work detected by the vision sensor.

[Array meaning]

Array elements (1 - 8): Vision sensor numbers

[Explanation of values returned]

Work detection count (0-255)

[Explanation]

1) Indicates the number of pieces of work detected by the vision sensor with the NVPST command or

NVIN command.

2) The stored recognition count is held until the next NVPST command or NVIN command is executed.

When an NVPST command or NVIN command is executed, the data is cleared to "0".

3) When the <Recognition count cell> specified with the NVPST command or NVIN command is a blank cell in the vision program or a vision program command is specified, this becomes "0".

[Sample sentence]

100 IF M_NVOPEN(1)<>1 THEN ' When logon has not been completed for vision sensor number 1

110 NVOPEN “COM2:” AS #1 ' Connects with the vision sensor connected to COM2.

120 ENDIF

130 WAIT M_NVOPEN(1)=1 ' Connects with vision sensor number 1 and waits for logon to be completed.

140 NVPST #1,”TEST”,”E76”,”J81”,”L84”,1,10

'Starts the "Test" program, receives the recognition count from the E76 cell and the recognition results from cells J81 through L84, and stores this in M_NVS1().

150 'Processes referencing the acquired data.

160 MVCNT=M_NVNUM(1) 'Acquires the number of pieces of work recognized by the vision sensor.

170 ・・・・・

300 NVCLOSE #1 'Cuts the line with the vision sensor connected to COM2.

[Errors]

1) If the type of data specified as an array element is incorrect, a "syntax error in input command statement" error occurs.

2) If there is an abnormal number of array elements (too many or too few), an "incorrect argument type" error occurs.

3) If an array element other than "1" through "8" is specified, an "array element mistake" error occurs.


(3) P_NVS1 - P_NVS8

[Function]

Stores the data recognized by the vision sensor in position data format.

In an NVPST command or NVIN command, when a <type> of "0" is specified, the data in the cell range specified with <Start cell> - <End cell> is stored as the X, Y, and C coordinates.

In an NVPST command or NVIN command, data must be stored in the order X, Y, C in the cells specified with <Start cell> - <End cell>.

Example:

In the above vision program, when "J96" and "L98" are specified in the <Start cell> and <End cell> of the

NVPST command or NVIN command, P_NVS1() becomes the following values.

P_NVS1(1)=(+347.14 , -20.23 , +0.00 , +0.00 , +0.00 , -158.19 , +0.00, +0.00)(0 , 0)

P_NVS1(2)=(+381.28 , +49.01 , +0.00 , +0.00 , +0.00 , +10.84 , +0.00, +0.00)(0 , 0)

P_NVS1(3)=(+310.81 , +43.65 , +0.00 , +0.00 , +0.00 , -34.312 , +0.00, +0.00)(0 , 0)

P_NVS1(4)=( +0.00 , +0.00 , +0.00 , +0.00 , +0.00 , +0.00 , +0.00, +0.00)(0 , 0)

P_NVS1(5)=( +0.00 , +0.00 , +0.00 , +0.00 , +0.00 , +0.00 , +0.00, +0.00)(0 , 0)

・・・・・・・

P_NVS1(30)=( +0.00 , +0.00 , +0.00 , +0.00 , +0.00 , +0.00 , +0.00, +0.00)(0 , 0)

[Array element count]: 30

The maximum number of pieces of work that a vision sensor can recognize at one time is 255, but the maximum number of sets of work information that a robot can acquire is 30.

[Sample sentence]

100 IF M_NVOPEN(1)<>1 THEN 'If vision sensor number 1 log on is not complete

110 NVOPEN “COM2:” AS #1 'Connects with the vision sensor connected to COM2.

120 ENDIF

130 WAIT M_NVOPEN(1)=1 ' Connects with vision sensor number 1 and waits for logon to be completed.

140 NVPST #1,”TEST”,”E76”,”J96”,”L98”,0,10

'Starts the "Test" program, receives the recognition count from the E76 cell and the recognition results from cells J96 through L98, and stores this in P_NVS1().

150 MVCNT=M_NVNUM(1) 'Acquires the number of pieces of work recognized by the vision sensor.

330 FOR MCNT=1 TO MVCNT 'Repeated once for each piece of work recognized

340 P10=P1 'Copies the reference position P1 to target position P10.

350 P10=P10*P_NVS1(MCNT) 'Corrects the difference from the reference work for the recognized work and substitutes it in P10.

360 MOV P10,-50 'Moves to above the first recognized piece of work.

370 MVS P10 'Moves to the position of the first recognized piece of work.

380 HCLOSE 1 'Grasps the work.

390 MVS P10,-50 'Moves to above the first recognized piece of work

400 NEXT MCNT


[Explanation]

1) In an NVPST command or NVIN command, when a <type> of "0" is specified, the data recognized by the vision sensor is stored in position data format.

2) When this variable is used, write the vision program to store the data in the order X, Y, and C.

Example:

3) The stored data is held until the next NVPST command or NVIN command is executed. However, this data is cleared by a program reset, END command, or power supply reset. Even if the continue function is enabled, the data is cleared (to 0 for all axes) for a power supply reset.

4) Also, if anything other than "0" is specified as the type with the NVPST command or NVIN command, all axes are cleared to "0".

5) If the acquired data is a vision program function or character string, "0" is stored in the corresponding axis.

6) The data for this variable is the valid position data for 8 axes.

7) When using multi-mechanism mode, see the explanation of the NVPST command.

[Errors]

1) If the type of data specified as an array element is incorrect, a "syntax error in input command statement" error occurs.

2) If there is an abnormal number of array elements (too many or too few), an "incorrect argument type" error occurs.

3) If an array element other than "1" through "30" is specified, an "array element mistake" error occurs.


(4) M_NVS1 - M_NVS8

[Function]

Stores the data recognized by the vision sensor in numeric data format.

In an NVPST command or NVIN command, when a <type> of "1" or "3" or “5” or “7” is specified, the data in the cell range specified with <Start cell> - <End cell> is converted into numbers and stored.

Example:

In the above vision program, when "J96" and "Q98" are specified in the <Start cell> and <End cell> of the NVPST command or NVIN command, the value of M_NVS1() becomes the following values.

Element 2

Element 1

1 347.147 -158.198

97.641 0.0

72.645 0.0

2 381.288 49.018 10.846 97.048 0.0

89.582 99.582 -118.311

0.0

43.65 -34.312 -163.469

0.0

4 0.0 0.0 0.0 0.0

5 0.0 0.0 0.0 0.0

[Array element count]: (30, 10)

It is possible to acquire 30 lines and 10 columns of information from all the cell information in the vision program.

[Sample sentence]

100 IF M_NVOPEN(1)<>1 THEN 'If vision sensor number 1 log on is not complete

110 NVOPEN “COM2:” AS #1 ' Connects with the vision sensor connected to COM2 and sets its number as number 1.

120 ENDIF

130 WAIT M_NVOPEN(1)=1 ' Connects with vision sensor number 1 and waits for logon to be completed.

140 NVPST #1,”TEST”,”E76”,”J96”,”Q98”,1,10

'Starts the "Test" program, receives the recognition count from the E76 cell and the recognition results from cells J96 through Q98, and stores this in M_NVS1().

150 MVCNT=M_NVNUM(1) 'Acquires the number of pieces of work recognized by the vision sensor.

330 FOR MCNT=1 TO MVCNT 'Repeated once for each piece of work recognized

340 P10=P1 'Copies the reference position P1 to target position P10.

345 P10.X=M_NVS1(MCNT,1) 'Substitutes the X coordinate of the recognized work in the X coordinate of P10.

350 P10.Y=M_NVS1(MCNT,2) 'Substitutes the Y coordinate of the recognized work in the Y coordinate of P10.

360 P10.C=M_NVS1(MCNT,3) 'Substitutes the C coordinate of the recognized work in the C coordinate of P10.

370 MOV P10,-50 'Moves to above the first recognized piece of work

380 MVS P10 'Moves to the position of the first recognized piece of work

390 HCLOSE 1 'Grasps the work.

400 MVS P10,-50 'Moves to above the first recognized piece of work

410 NEXT MCNT


[Explanation]

1) In an NVPST command or NVIN command, when a <type> of "1" or "3" or “5” or “7” is specified, the data recognized by the vision sensor is stored in numeric data format.

2) The stored data is held until the next NVPST command or NVIN command is executed. However, this data is cleared (to 0) by a program reset, END command, or power supply reset.

Also, if anything other than "1" and "3" and “5” and “7” is specified as the type with the NVPST command or NVIN command, the data is cleared to "0".

3) If the acquired data is a vision program function or character string, "0" is stored in the corresponding axis.

4) When using multi-mechanism mode, see the explanation of the NVPST command.

[Errors]

1) If the type of data specified as an array element is incorrect, a "syntax error in input command statement" error occurs.

2) If there is an abnormal number of array elements (too many or too few), an "incorrect argument type" error occurs.

3) If an array element 1 specification specifies other than "1" through "30", an "array element mistake" error occurs.

4) If an array element 2 other than "1" through "10" is specified, an "array element mistake" error occurs.


(5) C_NVS1 - C_NVS8

[Function]

Stores the data recognized by the vision sensor in text string data format.

In an NVPST command or NVIN command, when a <type> of "2" or "3" or “6” or “7” is specified, the data in the cell range specified with <Start cell> - <End cell> is stored.

Example:

In the above vision program, when "J95" and "Q98" are specified in the <Start cell> and <End cell> of the NVPST command or NVIN command, the values of C_NVS1() become as follows. Element 1 is the row of the acquired cell range and element 2 is the column. Row 1 holds the text of vision program row 95 (column headings such as "Score"), and rows 2 to 4 hold the recognition results for each piece of work as text strings, for example C_NVS1(2,1)="347.147", C_NVS1(2,2)="-158.198", C_NVS1(3,1)="381.289", C_NVS1(3,2)="49.017", C_NVS1(3,3)="10.844", C_NVS1(4,1)="310.81" and so on; score values such as "97.641", "97.227" and "96.217" are likewise stored as strings, and cells that contain no data hold the null string "".

[Array element count]: (30.10)

It is possible to acquire 30 lines and 10 columns of information from all the cell information in the vision program.

[Sample sentence]

10 DIM MSCORE(100) 'Declares the variable for storing scores.

100 IF M_NVOPEN(1)<>1 THEN ' When logon has not been completed for vision sensor number 1

110 NVOPEN "COM2:" AS #1 ' Connects with the vision sensor connected to COM2 and sets its number as number 1.

120 ENDIF

130 WAIT M_NVOPEN(1)=1 ' Connects with vision sensor number 1 and waits for logon to be completed.

140 NVPST #1,”TEST”,”E76”,”J95”,”Q98”,3,10

'Starts the "Test" program, receives the recognition count from the E76 cell and the recognition results from cells J95 through Q98, and stores them in C_NVS1(1,1) onward and M_NVS1(1,1) onward.

150 MVCNT=M_NVNUM(1) 'Acquires the number of pieces of work recognized by the vision sensor.

160 FOR MCNT2=1 TO 8

170 IF C_NVS1(1,MCNT2)=”Score” THEN BREAK

180 NEXT MCNT2

300 FOR MCNT1=1 TO MVCNT

'Repeated once for each piece of work recognized

310 MSCORE(MCNT1)=VAL(C_NVS1(MCNT1+1,MCNT2)) ' Stores the score for the recognized work into MSCORE.

320 NEXT MCNT1

325 MOK=0 ' Clears MOK

330 FOR MCNT=1 TO MVCNT 'Repeated once for each piece of work recognized

340 IF MSCORE(MCNT)>90 THEN MOK=MOK+1 'If the score is 90 points or higher, adds 1 to MOK.

350 NEXT MCNT

360 ' Checks the value of MOK and checks the number of pieces of work for which the score is 90 points or higher.

9-113 Vision Sensor Dedicated Commands and Status Variables

9 Detailed Explanation of Functions

[Explanation]

1) In an NVPST command or NVIN command, when a <type> of "2" or "3" or “6” or “7” is specified, the data recognized by the vision sensor is stored in text string format.

However, kanji codes can not be acquired.

2) The maximum number of pieces of work that a vision sensor can recognize at one time is 255, but the maximum number of sets of work information that a robot can acquire is 30.

3) The stored data is held until the next NVPST command or NVIN command is executed. However, this data is cleared by a program reset, END command, or power supply reset.

Also, if anything other than "2" and "3" and “6” and “7” is specified as the type with the NVPST command or NVIN command, the data is cleared to null.

4) If the acquired data is a vision program function or kanji code, a null character is stored in the corresponding element.

5) When using multi-mechanism mode, see the explanation of the NVPST command.

[Errors]

1) If the type of data specified as an array element is incorrect, a "syntax error in input command statement" error occurs.

2) If there is an abnormal number of array elements (too many or too few), an "incorrect argument type" error occurs.

3) If an array element 1 other than "1" through "30" is specified, an "array element mistake" error occurs.

4) If an array element 2 other than "1" through "10" is specified, an "array element mistake" error occurs.

Vision Sensor Dedicated Commands and Status Variables 9-114

9 Detailed Explanation of Functions

9.2.

MELFA-Vision Function Details

This section explains MELFA-Vision functions other than those explained in "Chapter 5 - 0.

9.2.1.

MELFA-Vision Screen

For explanations concerning the MELFA-Vision main screen, see "6.3.1 Starting MELFA-Vision (network vision sensor support software)".

Figure 9-1 MELFA-Vision Main Screen

9-115 MELFA-Vision Function Details

9 Detailed Explanation of Functions

9.2.2.

Job Editing screen ([Image Log] tab)

On the job edit screen [Image Log] tab, set the conditions for storing the images captured with the vision sensor on the PC. The FTP server must be started on the PC that stores the images.

For explanations concerning image log acquisition and reception, see "8.4 Image Log Acquisition Settings and Reception Start/End".

This section explains the conditions set with the [Image Log] tab.

Figure 9-2 Job Edit Screen [Image Log] Tab

Table 9-4 Job Edit Screen [Image Log] Tab Setting Item List

Save the Image Log : To acquire the image log, put a check in the [Save the Image Log] checkbox. To not acquire the image log, remove the check from the checkbox.

Save Condition : Sets the condition for storing images captured with the vision sensor to the PC.
Always : All images captured with the vision sensor are stored.
OK images : The image is stored if the count of recognized pieces of work is not 0.
NG images : The image is stored if the count of recognized pieces of work is 0.

File Name : Images captured with the vision sensor are stored as bitmap (bmp) files on the PC. This specifies the file name. * Up to 50 file names can be specified.

Max Number : This specifies the number of images stored on the PC. Serial numbers up to the specified number of images are attached after the file name.
Example: When the file name is "NGImage": "NGImage001.bmp", "NGImage002.bmp", ..., "NGImage010.bmp", ...
If the specified number of images is exceeded, the serial number is reset and already stored bmp files are overwritten.

Reset : This resets the serial numbers attached to the file name. The file name for the image captured after resetting becomes "file name 001.bmp".

User Name of FTP (*1) : Specifies the FTP server user name set with the displayed "Image Log Setting" screen.

Password of FTP (*1) : Specifies the FTP server password set with the displayed "Image Log Setting" screen.

IP Address of FTP : Specifies the IP address of the PC on which the FTP server is running.

Get IP Address From PC : When the MELFA-Vision FTP server is used and this button is clicked, the IP address of the PC MELFA-Vision is running on is displayed in [IP Address of FTP].

(*1) For details on the "Image Log Setting" screen, see "8.4 Image Log Acquisition Settings and Reception Start/End".

MELFA-Vision Function Details 9-116

9 Detailed Explanation of Functions

9.2.3.

Job edit screen ([Result Cell Position] tab)

The "Job Editing" screen [Result Cell Position] tab displays the "Found Number Cell", "Start", and "End" cells specified with the dedicated MELFA-BASIC IV commands for the network vision sensor. A cell is a position indicated by the column character and row character in the vision program.

This section explains the cell positions displayed on the screen.

Figure 9-3 Job Editing Screen ([Result Cell Position] Tab)

Table 9-5 Job Editing Screen [Result Cell Position] Tab Display Item List

Found Number Cell : The number of pieces of work recognized by the vision sensor is stored in the displayed cell position.

Robot 1 : The coordinates (robot 1 coordinates) for the work recognized by the vision sensor are stored from the displayed start cell position to the displayed end cell position.

Robot 2 (*1) : The coordinates (robot 2 coordinates) for the work recognized by the vision sensor are stored from the displayed start cell position to the displayed end cell position.

Robot 3 (*2) : The coordinates (robot 3 coordinates) for the work recognized by the vision sensor are stored from the displayed start cell position to the displayed end cell position.

[Type]: 0 to 3 : When a value from 0 to 3 is specified for the type of the MELFA-BASIC IV "NVPST" and "NVIN" commands, the positions of the cells specified for A and B are displayed.

[Type]: 4 to 7 : When a value from 4 to 7 is specified for the type of the MELFA-BASIC IV "NVPST" and "NVIN" commands, the positions of the cells specified for A and B are displayed.

(*1) Displayed when a job for two robots is selected.

(*2) Displayed when a job for three robots is selected.

9-117 MELFA-Vision Function Details

9 Detailed Explanation of Functions

9.2.4.

Vision sensor network settings

The vision sensor network settings can be changed. From the MELFA-Vision menu, click [Sensor] –

[Connection] – [Communication Setting] to display the "Network Settings" screen. Check with your network administrator for the items to set.

Figure 9-4 MELFA-Vision "Network Settings" Screen

Table 9-6 "Network Settings" Screen Setting Items List

Host Name : Changes the vision sensor name.
Use DHCP Server : Check this when using the DHCP server to allocate the IP address.
IP Address : Input the IP address.
Subnet Mask : Defines which part of the IP address shows the network and which part shows the host.
Default Gateway : Data can be relayed between hosts on different networks by specifying the gateway address.
DNS Server : Input the IP address of the network host that supplies DNS resolution (converting from host name to IP address).
Domain Name : Defines the domain name of the network the vision sensor is on.
DHCP Timeout : Specifies the DHCP server response wait time.
Transition to Time Out : Shifts to another connection without closing the connection.
Auto Delete : Closes the connection.

MELFA-Vision Function Details 9-118

9 Detailed Explanation of Functions

9.3.

Vision program detailed explanation

MELFA-Vision provides a number of programs (job files) as templates.

This section explains the templates provided.

9.3.1.

Templates provided for MELFA-Vision

The table below shows the templates provided for MELFA-Vision.

Table 9-7 List of Job Templates Provided

Image processing (*1) / Output coordinates (*4) / Number of robots (*5) / Displayable result count (*6)

Pattern matching (*2) / Absolute coordinates / 1 robot / 1, 4, 10, 20 or 30
Pattern matching (*2) / Absolute coordinates / 2 robots / 1, 4, 10, 20 or 30
Pattern matching (*2) / Absolute coordinates / 3 robots / 1, 4, 10, 20 or 30
Pattern matching (*2) / Relative coordinates / 1 robot / 1, 4, 10, 20 or 30
Blob (binarization processing) (*3) / Absolute coordinates / 1 robot / 1, 4, 10, 20 or 30
Color (*7) / Absolute coordinates / 1 robot / 1, 4, 10, 20 or 30

The templates are numbered No. 1 to 30; one template is provided for each combination of group and displayable result count above.

(*1) Pattern matching, blobs, edges, histograms, ID recognition, text comparison, etc. are provided for vision sensor image processing, but the image processing supported by MELFA-Vision is pattern matching and blobs.

(*2) Pattern matching is a method of detecting patterns in images based on registered patterns.

(*3) Blobbing is a method for detecting two-dimensional shape information such as the size, shape, position, linking, etc. of patterns expressed as blobs.

The blob template is effective for the following types of subjects.

・ Large subjects

・ Subjects whose shapes change irregularly

For details on blob image processing, see "9.3.2 Image processing - blobs".

9-119 Vision program detailed explanation

9 Detailed Explanation of Functions

(*4) There are two types of output coordinates.

Table 9-8 List of Coordinates Output to Robot

Absolute coordinate output : The detected pattern position is output converted to the robot coordinate system.

Relative coordinate output : The detected pattern position is output as robot coordinate system offset quantities relative to the registered pattern position.
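As a hypothetical illustration of the difference (the numbers are invented, not taken from this manual): if the pattern was registered at robot coordinates (X, Y, C) = (300.0, 150.0, 0.0) and a piece of work is detected at (305.0, 148.0, 10.0), absolute coordinate output returns (305.0, 148.0, 10.0), while relative coordinate output returns the offsets (+5.0, -2.0, +10.0) from the registered position.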

(*5) Templates are also provided that secure the data for two robots or for three robots with one image.

When multiple robots are connected to one vision sensor, it is possible to acquire the operation positions for each robot by capturing one image.

(*6) It is necessary for the vision program to prepare beforehand an area in which to store the work position data that the robot acquires. Expanding this area increases the amount of information that the robot can obtain, but also increases the vision program load time and the time for sending the information from the vision sensor to the robot. Therefore, MELFA-Vision provides templates for areas for storing 1/4/10/20/30 sets of work information. Select the one that best matches your system. These quantities (1/4/10/20/30) indicate the maximum number of sets of work data that the robot can acquire. For example, for acquiring 8 sets of work data, select the 10 template.

(*7) Color is a method to detect the pattern in the image based on the specified color pattern. Refer to "9.3.3 Image processing - Color" for details of the color image processing.

Vision program detailed explanation 9-120

9 Detailed Explanation of Functions

9.3.2.

Image processing - blobs

This section explains how to make the blob image processing settings, using the same example configuration as for pattern matching image processing (only one robot, results output as robot absolute coordinate values).

(1) In the [Job(Vision Program)List] on the left side of the main MELFA-Vision screen, click [New].

From the "Image Processing Method" screen displayed, select blob image processing, then click the [OK] button.

Figure 9-5 Blob Image Processing Selection

9-121 Vision program detailed explanation

9 Detailed Explanation of Functions

(2) Execute the work in order of the tabs from left to right on the displayed "Job Editing" screen.

For details on the [Adjust Image] tab, see "6.3.3 Image processing settings".

(3) When you click the "Job Editing" screen [Search Area & Condition(1)] tab, the conditions for executing blob image processing are set.

When you change a displayed setting item, then click the [Test] button, the results of image processing under the specified conditions are displayed at the main screen [Camera Image], so check whether or not the work is correctly recognized.

For details on the setting items, see "Table 9-9 List of [Search Area and Recognition Condition (1)] Tab Items".

Table 9-9 List of [Search Area and Recognition Condition (1)] Tab Items

Table 9-9 List of [Search Area and Recognition Condition (1)] Tab Items

Blob (Color Setting) : Black / White / Either : Selects the color of the work to be recognized (black, white, or any desired color).

Background (Color Setting) : Black / White : Specifies the color (black or white) that is the background for captured images.

Search Area : Click the [Image] button to set the range for detecting the work (blob). For details on the setting method, see "6.3.3 Image processing settings".

Number to Find : 1 - 255 : Sets the maximum number of pieces that can be detected in one image processing.

Area Limit (Min / Max) : 0 - 900000 : Sets the surface area range (minimum to maximum) for detected work (blobs). The surface area range is specified in pixels.

Fill Holes : ON / OFF : When there are holes in the work, put a check in the [Fill Holes] checkbox to recognize the work with the holes filled. To recognize the work with the holes unfilled, remove the check from the checkbox.

* For all the items, if a value outside the range is input, it is replaced with the nearest upper or lower limit value when the input field loses focus.

Vision program detailed explanation 9-122

9 Detailed Explanation of Functions

(4) When you click the "Job Editing" screen [Processing Condition(2)] tab, the conditions for executing blob image processing are set.

When you change a displayed setting item, then click the [Test] button, the results of image processing under the specified conditions are displayed at the main screen [Camera Image], so check whether or not the work is correctly recognized.

For details on the setting items, see "Table 9-10 List of [Processing Condition(2)] Tab Items".

Table 9-10 List of [Processing Condition(2)] Tab Items

Manual Threshold : 1 - 100 : Sets what degree of matching is required for recognition of work detected with the threshold specified by the grayscale threshold. The vision sensor expresses the degree of matching of the detected work as 1 - 100%; work whose degree of matching is lower than the value set here is not recognized.

Greyscale Threshold : 1 - 255 : Sets the grayscale threshold. When the "Auto Setting" checkbox is checked, the value is set automatically from the captured images.

Sort By : None / X / Y : Returns the recognized work results in the specified sort order. When "None" is specified, the results are returned with the work sorted in order of high recognition ratio. This sorting is used for cases such as when multiple work pieces are detected and you want to grasp the work in order from left to right in the image. The "X" and "Y" specified here indicate the "X" and "Y" of the red frame displayed with the search area setting.

Offset of Rotation : -180 - 180 : When outputting the recognized work results, this function adds the specified offset amount to the detection angle. When registering patterns, this is used if the 0˚-tilt image can not be captured.

Calibration No. : None, 1 - 10 : Selects the calibration data used when outputting the recognized work coordinate values converted to robot coordinate values. Work information can be converted to the coordinate systems for up to three robots and sent; therefore, it is possible to select calibration numbers for three robots.

* The figure above shows a screen assuming a system with one robot. When a system with three robots is selected, the [Robot 2:] and [Robot 3:] displays appear.
* For all the items, if a value outside the range is input, it is replaced with the nearest upper or lower limit value when the input field loses focus.

(5) For details on the [Image Log] tab, see "9.2.2 Job Editing screen ([Image Log] tab)"; for details on the [Result Cell Position] tab, see "9.2.3 Job edit screen ([Result Cell Position] tab)".

9-123 Vision program detailed explanation

9 Detailed Explanation of Functions

9.3.3.

Image processing – Color

This section explains how to make the color image processing settings, using the same example configuration as for pattern matching image processing (only one robot, results output as robot absolute coordinate values).

(1) In the [Job(Vision Program)List] on the left side of the main MELFA-Vision screen, click [New]. From the "Image Processing Method" screen displayed, select Color image processing, then click the [OK] button.

Figure 9-6 Color Image Processing Selection

Vision program detailed explanation 9-124

9 Detailed Explanation of Functions

(2) Execute the work in order of the tabs from left to right on the displayed "Job Editing" screen.

(3) Click the [White Balance] button on the [Adjust Image] tab to specify the reference color.

The [White Balance] button is displayed only for color image processing.

If the [White Balance] button is not clicked, the color cannot be recognized accurately, as shown in "Figure 9-2 When you do not click [White Balance] button".

Figure 9-2 When you do not click [White Balance] button

When the [White Balance] button is clicked, the reference color is registered and RGB colors can be recognized more accurately.

The reference color is the color of the 100*100-dot area at the center of the camera image at the moment the [White Balance] button is clicked.

When the reference color is set to white, colors are recognized in a way close to human perception.

Figure 9-3 When you click [White Balance] button

9-125 Vision program detailed explanation

9 Detailed Explanation of Functions

(4) Set the conditions for executing color image processing on the [Color] tab of the "Job Editing" screen.

When the [Test] button is clicked after a displayed setting item is changed, the result of image processing under the specified conditions is displayed in [Camera Image] on the main screen.

Confirm on the screen whether the light and shade of the work is clear according to the specified color.

For details on the setting items, see "Table 9-2 List of [Color] Tab Items".

Table 9-2 List of [Color] Tab Items

Color Area Setting : When the [Image] button is clicked, the focus shifts to the camera image and a square frame is displayed. Enclose the color to be recognized with the frame and press the [Enter] key.

Representation (ON/OFF) : Selects whether the specified color is acquired as RGB (Red/Green/Blue) information or as HIS (Hue/Intensity/Saturation) information.

Select Filter (ON/OFF) : Selects whether the image displayed on the screen is shown in color or in grayscale with the filter for the specified color applied.

Histogram : Displays the information on the color specified with [Color Area Setting]. When the [Representation] checkbox is OFF, RGB information is displayed; when it is ON, HIS information is displayed.

Threshold (-1: Auto, 0 - 255: Manual) : The initial value is "-1". When the value is "-1", the color of the work is converted to white by applying the filter for the color specified with [Color Area Setting]. When the light and shade of the work is not clear with this automatic filter, recognition can be adjusted by inputting a value from 0 to 255 for "Threshold".

Vision program detailed explanation 9-126

9 Detailed Explanation of Functions

(4-1) Specify the Color Area Setting.

When the [Image] button of "Color Area Setting" is clicked, the focus moves to the main screen and an area adjustment mark is displayed in [Camera Image].

The color to be recognized is detected from the area enclosed by this frame, so enclose the color to be recognized with the frame.

The area in which the work is detected can be changed with the mouse or keyboard.

If you use the keyboard, each time the [F9] key is pressed the "area adjustment mark" changes, and fine adjustments can be made with the [arrow keys].

To finalize the area, press the [Enter] key; to cancel it, press the [ESC] key. The focus returns to the "Job Editing" screen.

When the [Enter] key is pressed, the information for the specified color is displayed in "Histogram".


9-127 Vision program detailed explanation

9 Detailed Explanation of Functions

(4-2) Change [Camera Image] to a grayscale image of the specified color.

Turn the [Select Filter] checkbox ON and click the [Test] button.

The specified color is displayed in white, with the filter for the specified color applied.

Vision program detailed explanation 9-128

9 Detailed Explanation of Functions

(4-3) Adjust the color.

Change the value of "Threshold" and click the [Test] button.

For instance, to recognize green more strongly, increase the value of "Green" and decrease the other items.

Refer to the "Histogram" values as a guide.

9-129 Vision program detailed explanation

9 Detailed Explanation of Functions

(4-4) To recognize work by specifying not only Hue but also Saturation and Intensity, change "Representation".

Turn the "Representation" checkbox ON, set all the "Threshold" items to "-1", and click the [Test] button.

Vision program detailed explanation 9-130

9 Detailed Explanation of Functions

(4-5) Adjust the color.

Change the value of "Threshold" and click the [Test] button.

For instance, to recognize vividly colored work, increase the value of "Saturation" and decrease the other items.

Refer to the "Histogram" values as a guide.

(5) Specify the pattern and search area.

Display the [Pattern & Search Area] tab and register the work to be recognized.

Refer to "6.3.3 Image processing settings" for the registration method.

(6) Specify the conditions for recognition.

Display the [Condition] tab and specify the conditions for recognizing the work.

Refer to "6.3.3 Image processing settings" for the setting method.

9-131 Vision program detailed explanation

9 Detailed Explanation of Functions

9.3.4.

Using image processing for which there is no template

The templates provided for MELFA-Vision cover only pattern matching, blob, and color image processing. When using other image processing with a robot, write the vision program using "In-Sight Explorer", installed on the PC in "5.3.1 Vision sensor dedicated software (In-Sight Explorer before Ver.4.1) installation".

For details on how to write a vision program using "In-Sight Explorer", see the "In-Sight Explorer" help.

Vision program detailed explanation 9-132

9 Detailed Explanation of Functions

9.3.5.

To shorten the time for transferring data with the robot controller

The image processing templates prepared for MELFA-Vision use the mechanism of transferring the information on recognized work to the robot controller one set at a time (three communications, X, Y, and C per piece of work).

When it is desired to shorten the tact time, it is recommended that the vision program and robot program be altered to shorten the data transfer time.

Below is the method for transferring a maximum of four sets of data (127 bytes maximum) in each data transfer.

<Vision program change example>

Before change


Data exists in each cell in the vision program and the robot controller can use them without processing the acquired values.

However, in the example above, since a total of 12 data transfers are required for cells [J81] through [L84], the transfer time becomes longer.

After change

The cell values are converted to text strings (errors are output as the character "E"), and the text string cells are linked to form a single cell.

The above program is added to the vision program before change.

Cells [O81] through [Q84] use the vision program "count error" function. If there is an error, they display the character "E". Also, cell [S81] stores the data of cells [O81] through [Q84] in one cell, using the vision program "concatenate" function. Coordinates are delimited with "," and recognized work is delimited with "/".

For details on the functions used in vision programs, see the "In-Sight Explorer" help.

CAUTION

The maximum number of characters the robot can receive in one communication is 127.

Due to restrictions on communications with the robot, if the information for one piece of work is X, Y, and C, one data transfer can handle up to four sets of data.
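As a rough arithmetic check (an estimate, not a figure from this manual): if each of X, Y, and C is sent as a text string of up to about nine characters, such as "-1234.567", one piece of work needs roughly 3 × 9 = 27 characters plus delimiters, about 30 characters in total, so four pieces (about 120 characters) fit within the 127-character limit while five pieces would exceed it.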

9-133 Vision program detailed explanation

9 Detailed Explanation of Functions

<Robot program change example>

Change the program example in "7.3.2 Writing a Sample Robot Program" as follows. The parts of the

program in the boxes are the locations changed.

10 ' The work grasping position P1, and the work placement position P2 must have been taught beforehand.

20 ' Example: P0=(+250.000,+350.000,+500.000,-180.000,+0.000,+0.000)(7,0)

30 ' P1=(+500.000,+0.000,+300.000,-180.000,+0.000,+10.000)(7,0)

40 ' P2=(+300.000,+400.000,+300.000,-180.000,+0.000,+90.000)(7,0)

50 DIM MV(30,3)

60 IF M_NVOPEN(1)<>1 THEN 'If vision sensor number 1 logon is not complete

70 NVOPEN “COM2:” AS #1 'Connects with the vision sensor connected to COM2.

80 ENDIF

90 WAIT M_NVOPEN(1)=1 ' Connects with vision sensor number 1 and waits for logon to be completed.

100 NVPST #1,"Job2","E76","S81","S81",2,10 'Starts the [Job2] vision program and

110 'receives the recognized count from cell [E76] and the recognized coordinates from cell [S81] and stores them in C_NVS1().

120 MOV P0 ' Moves to the evacuation point.

130 IF M_NVNUM(1)=0 THEN *NG ' If the detection count is 0, jump to error

140 MROW=1 ' Clears the acquired cell row.

150 MNUM=1 ' Clears the recognized work number.

160 MKOSU=0 ' Clears the work information (X, Y, C) counter.

170 MST=1 ' Clears the head position extracting characters from the text string.

180 MED=1 ' Clears the tail position from which characters were extracted from the text string the previous time.

190 MFLG1=0 ' Clears the flag showing that all the information has been acquired.

200 WHILE MFLG1=0 ' Loops until all the information is acquired.

210 CGET$=C_NVS1(MROW,1) ' Acquires cell information.

220 MLEN=LEN(CGET$) ' Obtains the character count for the cell information acquired.

230 IF MLEN<>0 THEN ' When acquired cell information exists

240 FOR MPT=1 TO MLEN ' Loops for amount of cell information acquired

250 C1$=MID$(CGET$,MPT,1) ' Acquires cell information one cell at a time

260 IF C1$="," OR C1$="/" THEN ' If the acquired character is "," or "/"

270 C2$=MID$(CGET$,MPT-1,1) ' Acquires the information one character before a "," or "/".

280 MKOSU=MKOSU+1 ' Updates the work information

290 IF C2$<>"E" THEN MV(MNUM,MKOSU)=VAL(MID$(CGET$,MST,MPT-MED)) ELSE MV(MNUM,MKOSU)=0

300 MST=MPT+1 ' Specifies the head position extracting text.

310 MED=MST ' Specifies the tail position from which text was extracted.

320 ENDIF

330 IF C1$="/" THEN ' If the acquired character is "/"

340 MNUM=MNUM+1 ' Updates the recognized work number.

350 MKOSU=0 ' Clears the work information.

360 ENDIF

370 NEXT MPT

380 MROW=MROW+1 ' Moves to the next cell information.

390 ELSE

400 MFLG1=1 ' All information acquisition is complete.

410 ENDIF

420 WEND

430 MFLG1=0 ' Clears all information acquisition flags.

440 FOR M1=1 TO M_NVNUM(1) ' Loops once for each detection by vision sensor 1.

450 P10=P1 ' Creates the target position P10 using the vision sensor 1 results data.

460 P10.X=MV(M1,1)

470 P10.Y=MV(M1,2)

480 MC=MV(M1,3)

490 P10.C=RAD(MC)

500 MOV P10,10 ' Moves to 10 mm above the work grasping position P10.

510 MVS P10 ' Moves to the work grasping position P10.

Vision program detailed explanation 9-134

9 Detailed Explanation of Functions

520 DLY 0.1 ' Wait time of 0.1 second

530 HCLOSE 1 'Closes hand 1.

540 DLY 0.2 ' Wait time of 0.2 second

550 MVS P10,10 ' Moves to 10 mm above the work grasping position P10.

560 MOV P2,10 ' Moves to 10 mm above the work placement position P2.

570 MOV P2 ' Moves to the work placement position P2.

580 DLY 0.1 ' Wait time of 0.1 second

590 HOPEN 1 'Opens hand 1.

600 DLY 0.2 ' Wait time of 0.2 second

610 MVS P2,10 ' Moves to 10 mm above the work placement position P2.

620 NEXT ' Repeats.

630 HLT ' Program pause (Create the appropriate processing.)

640 END ' Exit

650 '

660 *NG ' Error processing

670 ERROR 9000 ' Error No. 9000 output

680 HLT ' Program pause (Create the appropriate processing.)

690 END ' Exit

9-135 Vision program detailed explanation

9 Detailed Explanation of Functions

9.4.

Detailed explanation of systems combining multiple vision sensors and robots

The systems explained in Chapter 5 through Chapter 0 were systems with one vision sensor and one robot

controller.

With this system it is also possible to construct systems with one robot controller and up to seven vision sensors and systems with one vision sensor and up to three robot controllers.

This chapter explains the construction of these systems.

9.4.1.

Systems with one robot controller and multiple vision sensors

This section only explains those aspects of the setting method for constructing a system with one robot

controller and multiple vision sensors that are different from the contents covered in Chapter 5 through

Chapter 0.

(1) Change the robot controller communication settings.

From the MELFA-Vision menu, select [Controller] – [Communication Setting] to display the

"Communication Setting" screen.

Set the "Line and Device" and "Device List" for the number of vision sensors connected.

Below is an example for connecting three vision sensors.

Click the [Write] button to change the robot controller parameters.

Detailed explanation of systems combining multiple vision sensors and robots 9-136

9 Detailed Explanation of Functions

(2) Write the vision program for each vision sensor.

Log onto the connected vision sensors and write the vision program for each vision sensor. For details

on the logon method and vision program writing method, see "6.3 Work recognition test".

(3) Write the robot program to control multiple vision sensors.

Write a robot program like the following.

10 IF M_NVOPEN(1)<>1 THEN NVOPEN “COM2:” AS #1 'Connects to vision sensor 1 (COM2).

20 IF M_NVOPEN(2)<>1 THEN NVOPEN “COM3:” AS #2 'Connects to vision sensor 2 (COM3).

30 IF M_NVOPEN(3)<>1 THEN NVOPEN “COM4:” AS #3 'Connects to vision sensor 3 (COM4).

40 WAIT M_NVOPEN(1)=1 ' Connects with vision sensor number 1 and waits for logon to be completed.

50 WAIT M_NVOPEN(2)=1 ' Connects with vision sensor number 2 and waits for logon to be completed.

60 WAIT M_NVOPEN(3)=1 ' Connects with vision sensor number 3 and waits for logon to be completed.

70 NVPST #1,“Job1”,“E76”,“J81”,“L85”,0,10 'Starts the vision program and acquires the results.

80 NVPST #2,“Job1”,“E76”,“J81”,“L85”,0,10 'Starts the vision program and acquires the results.

90 NVPST #3,“Job1”,“E76”,“J81”,“L85”,0,10 'Starts the vision program and acquires the results.

・・・・・・

・・・・・・ Operates the robot with the results received.

・・・・・・

200 NVCLOSE

9.4.2.

Systems with one vision sensor and multiple robot controllers

This section only explains those aspects of the setting method for constructing a system with one vision

sensor and multiple robot controllers that are different from the contents covered in Chapter 5 through

Chapter 0.

This section shows a system with two robots as an example.

(1) Write the vision program for two robots.

See "6.3.3 Image processing settings" and on the "Image Processing Method Selection" screen, select

the template for two robots and write the vision program.

When the template for two robots is selected, on the "Job Editing" screen [Processing Condition] tab, the calibration setting items for two robots are displayed.

(2) Execute the calibration work for two robots.

See "7.2 Calibration Setting" and execute the calibration work for two robots. When doing the work for

the second robot, change the "Robot datum" robot selection.

9-137 Detailed explanation of systems combining multiple vision sensors and robots

9 Detailed Explanation of Functions

(3) Set the calibration number.

Specify the calibration number for [Robot 1] and [Robot 2] in [Calibration No.] on the "Job Editing" screen [Processing Condition] tab.

(4) Write the robot program

<Robot 1>

10 IF M_NVOPEN(1)<>1 THEN NVOPEN “COM2:” AS #1 'Connects to vision sensor 1 (COM2).

20 WAIT M_NVOPEN(1)=1 ' Connects with vision sensor number 1 and waits for logon to be completed.

30 NVPST #1,“Job1”,“E76”,“J81”,“L85”,0,10 'Starts the vision program and acquires the results.

40 M_OUT(10)=1 'Notifies Robot 2 that reception is possible.

50 WAIT M_IN(10)=1 'Checks that Robot 2 has received the notice.

60 M_OUT(10)=0 ' Switches off the reception possible signal to Robot 2.

・・・・・・

・・・・・・Operates the robot with the results received.

・・・・・・

200 NVCLOSE

<Robot 2>

10 IF M_NVOPEN(1)<>1 THEN NVOPEN “COM2:” AS #1 'Connects to vision sensor 1 (COM2).

20 WAIT M_NVOPEN(1)=1 ' Connects with vision sensor number 1 and waits for logon to be completed.

30 WAIT M_IN(10)=1 'Waits for contact from Robot 1.

40 M_OUT(10)=1 'Outputs to Robot 1 that it received the notice.

50 WAIT M_IN(10)=0 'Checks that Robot 1 has verified reception.

60 M_OUT(10)=0 ' Switches off the notice received signal to Robot 1.

70 NVIN #1,“Job1”,“E76”,“O81”,“Q85”,0,10 'Acquires the results.

・・・・・・

・・・・・・Operates the robot with the results received.

・・・・・・

200 NVCLOSE

* Note that the cell positions for Robot 1 and Robot 2 to acquire data from the vision sensor are different.

Detailed explanation of systems combining multiple vision sensors and robots 9-138

10 Troubleshooting

10. Troubleshooting

This chapter lists the errors that can occur in using network vision sensors and explains the causes of and solutions to these errors.

10.1.

Error list

Below are the messages for error numbers, their causes, and the solutions.

The meanings of the error levels in the table are as follows.

Table 10-1 Error Category List

H (High-level error) : The servo is switched off and program execution is stopped.

L (Low-level error) : Program execution is stopped.

C (Warning) : Program execution is continued.

Table 10-2 List of Errors for Vision Sensor Use

Error No. 3110 (Level L): Argument out of range
Cause: One of the argument values specified in a command is out of its range.
Solution: Check the argument range and re-input.

Error No. 3120 (Level L): Incorrect argument count
Cause: The number of arguments in the executed command is incorrect.
Solution: Check the argument count and re-input.

Error No. 3130 (Level L): Attempt was made to open an already open communication file
Cause: The communication line that was the subject of the attempted opening is already open.
Solution: Check the COM number and vision sensor number and re-execute. Or check the communication parameters.

Error No. 3141 (Level L): The NVOPEN command is not executed
Cause: No NVOPEN command was executed before execution of a command communicating with the vision sensor.
Solution: Revise the robot program to execute the NVOPEN command.

Error No. 3142 (Level L): The communication line can not be opened
Cause: The line for communication with the vision sensor can not be opened.
Solution: Check the communication cable or the communication parameters.

Error No. 3287 (Level L): This command can not be used if the start condition is ERR or ALW
Solution: Revise the program.

Error No. 3810 (Level L): Incorrect argument type
Cause: The arithmetic calculation, single-item calculation, comparison, or function argument type is incorrect.
Solution: Specify the correct argument.

Error No. 4220 (Level L): Syntax error in input command statement
Cause: There is a mistake in the command.
Solution: Check the program contents and syntax.

Error No. 4370 (Level L): Array element mistake
Cause: 1. An array element is outside the defined range. 2. A variable was specified that can not be arrayed.
Solution: 1. Revise the array elements to be within 1 to the maximum element. 2. Stop the array element specification.

Error (Level L): Incorrect parameter setting
Cause: The parameter setting is incorrect.
Solution: Check the NETHSTIP, NETPORT, NETMODE, and other such parameters.

10-139 Error list

10 Troubleshooting

Table 10-3 List of Errors Only for Vision Sensors

Error No. 8601 (Level L): The vision sensor is not connected
Cause: No vision sensor is connected to the specified COM number.
Solution: Check the specified vision sensor number, the "COMDEV" parameter, and other such settings.

Error No. 8602 (Level L): Logon not possible / wrong password
Cause: The communication line was opened, but there is no response from the vision sensor.
Solution: Reset the program and start it again.
Cause: The password for the user set with the "NVUSER" parameter is not set in the "NVPSWD" parameter.
Solution: Set the correct password.

Error No. 8603 (Level L): Parameter abnormality
Cause: The user name or password parameter is abnormal.
Solution: Check the NVUSER and NVPSWD parameters.

Error (Level L): Abnormal communications
Cause: Communication with the vision sensor was cut off before or during command execution.
Solution: Check the communication cable between the robot and vision sensor.

Error No. 8620 (Level L): Abnormal vision sensor number specification
Cause: The specified vision sensor number is not defined with an NVOPEN command.
Solution: Check that the specified vision sensor number is correct. Also, check that that number is defined with an NVOPEN command.

Error No. 8621 (Level L): Abnormal vision program name
Cause: The specified vision program name is more than 15 characters.
Solution: Specify a vision program name with no more than 15 characters.

Error (Level L): The specified vision program is not present
Cause: The specified vision program does not exist in the specified vision sensor.
Solution: Check whether the specified vision program exists in the specified vision sensor. Also check that the vision program name specified is correct.

Error (Level L): Abnormal recognition count cell
Cause: The recognition count was not in the cell specified as the recognition count cell.
Solution: Check that the correct cell is specified.

Error No. 8631 (Level L): Specified cell value out of range
Cause: One of the following applies: the values specified for the start cell and end cell are reversed; the range specified by Start Cell and End Cell exceeds line 30 and row 10; the number of data included in the cells specified by Start Cell and End Cell exceeds 90.
Solution: Check that the correct cell is specified. Check the number of data acquired from the cells specified by Start Cell and End Cell.

Error (Level L): Response timeout
Cause: There is no response from the vision sensor within the specified time or within a specific time.
Solution: Check that the specified time is correct. Or check that the vision sensor settings are correct.

Error No. 8633 (Level L): NVTRG response timeout
Cause: No response to the image capture request.
Solution: Check the communications cable.

Error No. 8634 (Level L): There is a comma within the specified range of the cell
Cause: There is a comma in the cells specified for Start Cell and End Cell even though a value from 1 to 3 is specified for Type.
Solution: Check the value set for Type, or the Start Cell and End Cell.

Error list 10-140

10 Troubleshooting

Error No. 8635 (Level L): There is no comma within the specified range of the cell
Cause: There is no comma in the cells specified for Start Cell and End Cell even though a value from 4 to 7 is specified for Type.
Solution: Check the value set for Type, or the Start Cell and End Cell.

Error (Level L): Abnormal image capture specification
Cause: The image capture specification is other than "camera" or "manual".

Error No. 8650 (Level L): The vision sensor is not online
Cause: The vision sensor is offline.
Solution: Put the vision sensor online to enable control from the outside.

Error No. 8660 (Level L): Not permitted to control vision sensor
Cause: The NVUSER and NVPSWD parameters set for logging on to the vision sensor do not have the right of full access to the vision sensor.
Solution: Check the vision sensor side user list registration and specify the name of a user with full access in NVUSER and their password in NVPSWD.

Error No. 8670 (Level L): After the program was stopped, it was started without being reset
Solution: Reset the robot program, then start it.

10-141 Error list

11 Appendix

11. Appendix

11.1.

Performance of this product (comparison with built-in type RZ511 vision sensor)

Below is a comparison of the performance of this product with that of our built-in type vision sensors.

11.1.1.

Comparison of work recognition rate

(1) Comparative results by work shape and conditions

Table 11-1 Comparative Results by Work Shape and Conditions

Work condition / Network vision sensor / Built-in vision sensor

Overlap : ○ / ×
Approach or contact : ○ / △
Tilt : ○ / ×
Front/rear judgment : ○ / △

○: Recognition with pretty much no problems possible
△: Recognition possible under some conditions
×: Recognition almost impossible

With network vision sensors, it is possible to recognize overlapping work and work nearly in contact or in contact, work that is difficult to recognize with the built-in vision sensors.

Network vision sensors also improve the recognition rate for tilted work and front/rear work.

(2) Comparison of functions for recognition pattern registration

Table 11-2 Comparison of Functions for Recognition Pattern Registration

Function / Network vision sensor / Built-in vision sensor

Area size change : Yes / Yes
Specification of coordinates for output to robot : Yes / Yes
Area angle change : Yes (No need to change the work angle) / No (Necessary to change the work angle)
Area shape change : Yes (Square/fan shape/round) / No (Square only)

Network vision sensors improve the work pattern registration functions.

It is possible to change the area angle without any need to change the work angle and the area shape can be changed to square, fan shaped or circle.

11.1.2.

Comparison of image processing capacity

Table 11-3 Image Processing Capacity

Recognized work count: 4 / 10 / 30

Network vision sensor
Image processing time : 276 ms / 278 ms / 367 ms
Data transfer time : 121 ms / 142 ms / 249 ms
Total time : 397 ms / 420 ms / 616 ms

Built-in vision sensor
Image processing time : 794 ms / 946 ms / 946 ms
Data transfer time : 80 ms / 99 ms / 127 ms
Total time : 874 ms / 1045 ms / 1073 ms

"Table 11-3 Image Processing Capacity" shows the measurement results when the work is recognized

with the same conditions.
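The total time rows are the sum of the image processing time and the data transfer time; for example, 276 ms + 121 ms = 397 ms for the network vision sensor recognizing 4 pieces of work, and 946 ms + 127 ms = 1073 ms for the built-in vision sensor recognizing 30.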

The image processing time can be reduced by using network vision sensors.

For network vision sensors, the increase in the number of pieces of work recognized increases the data transfer time.

Performance of this product (comparison with built-in type RZ511 vision sensor) 11-142

11 Appendix

11.1.3.

Factors affecting the processing time

Factors affecting the processing time

(1) Delay in communication time by hub and communication time when no hub used

There is almost no difference in communication time due to a hub.

There is no problem with any hubs, but when an old hub is used, there is a possibility of some variation in communication time.

There is no particular difference in the communication time when connecting directly with a cross-cable without using a hub.

(2) About the effect of other equipment connected to the network

When equipment other than the network vision sensor, robot controller, or monitor PC is connected to the network, the communication time may become longer.

Even when a program that communicates over the network is running on a monitor PC connected to the network, the communication time may become longer.

11-143 Performance of this product (comparison with built-in type RZ511 vision sensor)

11 Appendix

11.2.

Calibration No. marking sheet

This is a marking sheet used in calibration work. Enlarge or reduce it as necessary to match the size of the field of vision of the image.

Calibration No. marking sheet 11-144

HEAD OFFICE : MITSUBISHI DENKI BLDG MARUNOUCHI TOKYO 100-8310

NAGOYA WORKS : 1-14, YADA-MINAMI 5, HIGASHI-KU, NAGOYA, JAPAN

Oct.2009 MSW-BFP-A8779 Printed in Japan on recycled paper. Specifications are subject to change without notice.

MITSUBISHI ELECTRIC

HEADQUARTERS

MITSUBISHI ELECTRIC EUROPE B.V.

German Branch

Gothaer Straße 8

D-40880 Ratingen

Phone: +49 (0)2102 / 486-0

Fax: +49 (0)2102 / 486-1120

EUROPE

MITSUBISHI ELECTRIC EUROPE B.V.-org.sl.

Czech Branch

Avenir Business Park, Radlická 714/113a

CZ-158 00 Praha 5

Phone: +420 - 251 551 470

Fax: +420 (0)251-551-471

CZECHREP.

FRANCE MITSUBISHI ELECTRIC EUROPE B.V.

French Branch

25, Boulevard des Bouvets

F-92741 Nanterre Cedex

Phone: +33 (0)1 / 55 68 55 68

Fax: +33 (0)1 / 55 68 57 57

MITSUBISHI ELECTRIC EUROPE B.V.

Irish Branch

Westgate Business Park, Ballymount

IRL-Dublin 24

Phone: +353 (0)1 4198800

Fax: +353 (0)1 4198890

IRELAND

MITSUBISHI ELECTRIC EUROPE B.V.

Italian Branch

Viale Colleoni 7

I-20041 Agrate Brianza (MB)

Phone: +39 039 / 60 53 1

Fax: +39 039 / 60 53 312

MITSUBISHI ELECTRIC EUROPE B.V.

Poland Branch

Krakowska 50

PL-32-083 Balice

Phone: +48 (0)12 / 630 47 00

Fax: +48 (0)12 / 630 47 01

ITALY

POLAND

MITSUBISHI ELECTRIC EUROPE B.V.

52, bld. 3 Kosmodamianskaya nab 8 floor

RU-115054 Мoscow

Phone: +7 495 721-2070

Fax: +7 495 721-2071

MITSUBISHI ELECTRIC CORPORATION

Office Tower “Z” 14 F

8-12,1 chome, Harumi Chuo-Ku

Tokyo 104-6212

Phone: +81 3 622 160 60

Fax: +81 3 622 160 75

MITSUBISHI ELECTRIC AUTOMATION, Inc.

500 Corporate Woods Parkway

Vernon Hills, IL 60061

Phone: +1 847 478 21 00

Fax: +1 847 478 22 53

RUSSIA

MITSUBISHI ELECTRIC EUROPE B.V.

Spanish Branch

Carretera de Rubí 76-80

SPAIN

E-08190 Sant Cugat del Vallés (Barcelona)

Phone: 902 131121 // +34 935653131

Fax: +34 935891579

MITSUBISHI ELECTRIC EUROPE B.V.

UK Branch

Travellers Lane

UK-Hatfield, Herts. AL10 8XB

Phone: +44 (0)1707 / 27 61 00

Fax: +44 (0)1707 / 27 86 95

UK

JAPAN

USA

EUROPEAN REPRESENTATIVES

GEVA

Wiener Straße 89

AT-2500 Baden

Phone: +43 (0)2252 / 85 55 20

Fax: +43 (0)2252 / 488 60

Koning & Hartman b.v.

Woluwelaan 31

BE-1800 Vilvoorde

Phone: +32 (0)2 / 257 02 40

Fax: +32 (0)2 / 257 02 49

AUSTRIA

BELGIUM

INEA RBT d.o.o.

Aleja Lipa 56

BOSNIA AND HERZEGOVINA

BA-71000 Sarajevo

Phone: +387 (0)33 / 921 164

Fax: +387 (0)33 / 524 539

AKHNATON

4, Andrei Ljapchev Blvd., PO Box 21

BG-1756 Sofia

Phone: +359 (0)2 / 817 6000

Fax: +359 (0)2 / 97 44 06 1

BULGARIA

AutoCont C.S. s.r.o.

Technologická 374/6

CZ-708 00 Ostrava-Pustkovec

Phone: +420 595 691 150

Fax: +420 595 691 199

CZECH REPUBLIC

DENMARK Beijer Electronics A/S

Lykkegårdsvej 17

DK-4000 Roskilde

Phone: +45 (0)46/ 75 76 66

Fax: +45 (0)46 / 75 56 26

Beijer Electronics OY

Peltoie 37

FIN-28400 Ulvila

Phone: +358 (0)207 / 463 540

Fax: +358 (0)207 / 463 541

FINLAND

UTECO

5, Mavrogenous Str.

GR-18542 Piraeus

Phone: +30 211 / 1206 900

Fax: +30 211 / 1206 999

AXICONT AUTOMATIKA Kft.

(ROBOT CENTER) Reitter F. U. 132

HU-1131 Budapest

Phone: +36 1 / 412-0882

Fax: +36 1 / 412-0883

ALFATRADE Ltd.

99, Paola Hill

Malta- Paola PLA 1702

Phone: +356 (0)21 / 697 816

Fax: +356 (0)21 / 697 817

HIFLEX AUTOM.TECHNIEK B.V.

Wolweverstraat 22

NL-2984 CD Ridderkerk

Phone: +31 (0)180 – 46 60 04

Fax: +31 (0)180 – 44 23 55

GREECE

HUNGARY

MALTA

NETHERLANDS

EUROPEAN REPRESENTATIVES

Koning & Hartman b.v.

Haarlerbergweg 21-23

NL-1101 CH Amsterdam

Phone: +31 (0)20 / 587 76 00

Fax: +31 (0)20 / 587 76 05

Beijer Electronics AS

Postboks 487

NO-3002 Drammen

Phone: +47 (0)32 / 24 30 00

Fax: +47 (0)32 / 84 85 77

Fonseca S.A.

R. João Francisco do Casal 87/89

PT - 3801-997 Aveiro, Esgueira

Phone: +351 (0)234 / 303 900

Fax: +351 (0)234 / 303 910

SIRIUS TRADING & SERVICES SRL

Aleea Lacul Morii Nr. 3

RO-060841 Bucuresti, Sector 6

Phone: +40 (0)21 / 430 40 06

Fax: +40 (0)21 / 430 40 02

INEA RBT d.o.o.

Izletnicka 10

SER-113000 Smederevo

Phone: +381 (0)26 / 615 401

Fax: +381 (0)26 / 615 401

SIMAP s.r.o.

Jána Derku 1671

SK-911 01 Trencín

Phone: +421 (0)32 743 04 72

Fax: +421 (0)32 743 75 20

PROCONT, spol. s r.o. Prešov

Kúpelná 1/A

SK-080 01 Prešov

Phone: +421 (0)51 7580 611

Fax: +421 (0)51 7580 650

NETHERLANDS

NORWAY

PORTUGAL

ROMANIA

SERBIA

SLOVAKIA

SLOVAKIA

INEA RBT d.o.o.

Stegne 11

SI-1000 Ljubljana

Phone: +386 (0)1 / 513 8116

Fax: +386 (0)1 / 513 8170

Beijer Electronics Automation AB

Box 426

SE-20124 Malmö

Phone: +46 (0)40 / 35 86 00

Fax: +46 (0)40 / 93 23 01

Robotronic AG

Schlachthofstrasse 8

CH-8406 Winterthur

Phone: +41 (0)52 / 267 02 00

Fax: +41 (0)52 / 267 02 01

SLOVENIA

SWEDEN

SWITZERLAND

GTS

Bayraktar Bulvari Nutuk Sok. No:5

TURKEY

TR-34775 Yukarı Dudullu-Ümraniye-İSTANBUL

Phone: +90 (0)216 526 39 90

Fax: +90 (0)216 526 3995

UKRAINE CSC Automation Ltd.

4-B, M. Raskovoyi St.

UA-02660 Kiev

Phone: +380 (0)44 / 494 33 55

Fax: +380 (0)44 / 494-33-66

MIDDLE EAST REPRESENTATIVE

I.C. SYSTEMS LTD.

23 Al-Saad-Al-Alee St.

EG-Sarayat, Maadi, Cairo

Phone: +20 (0) 2 / 235 98 548

Fax: +20 (0) 2 / 235 96 625

ILAN & GAVISH Ltd.

24 Shenkar St., Kiryat Arie

IL-49001 Petah-Tiqva

Phone: +972 (0)3 / 922 18 24

Fax: +972 (0)3 / 924 0761

EGYPT

ISRAEL

AFRICAN REPRESENTATIVE

CBI Ltd.

Private Bag 2016

ZA-1600 Isando

Phone: + 27 (0)11 / 977 0770

Fax: + 27 (0)11 / 977 0761

SOUTH AFRICA

Mitsubishi Electric Europe B.V. /// FA - European Business Group /// Gothaer Straße 8 /// D-40880 Ratingen /// Germany

Tel.: +49(0)2102-4860 /// Fax: +49(0)2102-4861120 /// [email protected] /// www.mitsubishi-automation.com
