United States Patent [19]
Hamada et al.

[11] Patent Number: 4,987,527
[45] Date of Patent: Jan. 22, 1991

[54] PERSPECTIVE DISPLAY DEVICE FOR DISPLAYING AND MANIPULATING 2-D OR 3-D CURSOR, 3-D OBJECT AND ASSOCIATED MARK POSITION

[75] Inventors: Tomoyuki Hamada; Kohzi Kamezima; Ikuo Takeuchi, all of Ibaraki; Yuriko Watanabe, Tokyo, all of Japan

[73] Assignee: Hitachi, Ltd., Tokyo, Japan

[21] Appl. No.: 261,498

[22] Filed: Oct. 24, 1988

[30] Foreign Application Priority Data
Oct. 26, 1987 [JP] Japan .......... 62-268167
Oct. 26, 1987 [JP] Japan .......... 62-269833
Oct. 26, 1987 [JP] Japan .......... 62-269834

[51] Int. Cl.5 .......... G05B 19/18; G05B 19/42; G05B 19/405; G06F 3/033
[52] U.S. Cl. .......... 364/167.01; 340/709; 364/522
[58] Field of Search .......... 340/709; 364/167.01, 521, 522

[56] References Cited

U.S. PATENT DOCUMENTS
4,698,625 10/1987 McCaskill et al. .......... 340/709
4,791,478 12/1988 Tredwell et al. .......... 340/709 X
4,812,829 3/1989 Ebina et al. .......... 340/709
4,835,528 5/1989 Flinchbaugh .......... 340/709

FOREIGN PATENT DOCUMENTS
0213316 3/1987 European Pat. Off. .......... 340/709
0097409 5/1985 Japan .
0079589 4/1986 Japan .
0177578 8/1986 Japan .
0199108 9/1986 Japan .
0114022 5/1987 Japan .......... 340/709

OTHER PUBLICATIONS
LOGICADD (TM) User's Manual, Generic CADD (TM), Version 3.0, Generic Software, Inc., Redmond, Wash., Apr. 1987, 4-114-116, 4-123, 4-126-7, 4-133.
Orr, J. N., "Upwardly Mobile CADD", PC Magazine, vol. 6, No. 21, Dec. 8, 1987, 93.

Primary Examiner: Clark A. Jablon
Attorney, Agent, or Firm: Antonelli, Terry, Stout & Kraus

[57] ABSTRACT

A manual operating system for manually operating a pattern displayed on a screen of a graphics display device, which is included in the manual operating system so as to display generated patterns, comprises perspective projection means for displaying the cursor, object and mark in perspective projection on the basis of information supplied from cursor information storage means and object information storage means.

8 Claims, 8 Drawing Sheets
[FIG. 1 (Sheet 1 of 8): entire configuration of the system — robot 1, work space map 2, action knowledge 3 (action interpretation knowledge 301, motion generation knowledge 302), graphics display device 4, action interpreter 5, character input device 6, position input device 7, cursor information storage means 8, perspective projection display means 9, judging means 10, button input device 11, display changing means 12, and motion generator 13.]
[FIG. 2 (Sheet 2 of 8): contents of the cursor information storage means — position of cursor 2001: (10, 20, 35); coordinates of points 2002: P1(5, 5, 0), P2(-5, 5, 0), P3(-5, -5, 0); links of points 2003: P1-P2, P2-P3.]

[FIG. 3: patterns displayed on the screen — cursor pattern 3001, object pattern 3002, and mark 3003.]
[FIG. 4 (Sheet 3 of 8): configuration of the work space map — a tree of nodes, each linked to the object supporting it: cap node 2110 (position and posture A1 2111; link "body, attached" 2112; coordinates of points 2113: P1(20, 30, 20), P2(-20, 30, 20); links of points 2114: P1-P2; mark position 2115: (10, 10, 45)), body node 2120 (position and posture A2 2121; link "floor, on" 2122), bolt node 2130 (position and posture A4 2131; link "floor, on" 2132), and floor node 2140 (position and posture A3 2141; links 2142), under a root node 2150.]
[FIG. 6 (Sheet 5 of 8): flowchart of the input of an action directive — display robot, cursor and objects; copy work space map into stack (32); input action name; display and select reference point marks, and input path points (33); produce action message (34); reset work space map by using stack; generate motion data from action message (36); reset work space map by using stack; perform dynamics simulation (39); reset work space map by using stack; drive robot mechanism (42).]
[FIGS. 7 to 10 (Sheet 6 of 8): successive screens of the graphics display device during input of an action directive — the bolt 3020, the parts 3030 and 3040, the cursor and the displayed marks.]
[FIG. 11 (Sheet 7 of 8): contents of the action knowledge — action interpretation knowledge 301, description 311: CARRY = SPECIFY GRASP POINT, SPECIFY PATH, SPECIFY PUT POINT, END; motion generation knowledge 302: CARRY(A, P, B) = TAKE(A), PATH(P), PUT(B), END; TAKE(A) = GRASP(A) or DETACH(A), END; PUT(A) = ON(A) or ATTACH(A), END.]
[FIG. 12 (Sheet 8 of 8): configuration diagram of another example of the system, in which a two-dimensional position input device and a button input device 111 are used.]

[FIG. 13: change of the cursor and mark shapes at each changeover of the constraint plane, caused by button input.]
PERSPECTIVE DISPLAY DEVICE FOR DISPLAYING AND MANIPULATING 2-D OR 3-D CURSOR, 3-D OBJECT AND ASSOCIATED MARK POSITION

BACKGROUND OF THE INVENTION

The present invention relates to a manual operating system for a locomotive body such as a robot, and in particular to a manual operating system for providing a locomotive body such as a robot with motions in a short time by using simple manual operation and a highly intelligent processor.

Conventional methods for providing a robot with motions will now be described. In the direct teaching method, positions are memorized while moving an actual robot by using a teaching box coupled to a control device of the robot or a human hand, and the memorized positions are successively read out to provide the robot with the motions. In an alternative method, motions are described in the form of a program by using a robot language or the like, and a robot is moved by the program. In the master-slave method, an arm for manual operation (i.e., a master arm) is provided, and a robot is moved by interlocking the master arm with the motion of the robot. In a manual operating system using a computer, manual operation is performed by using a robot displayed on a screen of the computer as the master arm as described in JP-A-61-79589, or teaching is performed with respect to a robot on the screen by means of direct teaching as described in JP-A-61-177578. Or a drawing of a working object is displayed on a screen, and the position and posture are calculated on the basis of a point on that drawing specified by coordinate values or the like as described in JP-A-61-199108 and JP-A-60-97409.

The above described conventional techniques have problems as described below.

First of all, in the direct teaching method, a large number of positions must be taught for providing a robot with one motion, and respective positions must be given accurately, a large amount of labor and time being required for teaching.

In a method using a robot language, it is difficult to imagine the actual motion of the robot because the motion is described by using characters and numeric values. In addition, the manual operator must be acquainted with the grammar of the language, the conventions of the coordinate system, and the transformation procedure of the coordinate system, and a high degree of knowledge and consideration is required for providing desired motions.

On the other hand, it is possible to provide a robot with motions simply when the master-slave method is used. Since motions of the human hand become motions of the robot as they are, however, failure is not permitted. Further, when motions including precise positioning are to be given, the manual operator must pay sufficient attention and must be skilled in manual operation.

Many of the systems in which robots are manually operated by using computers have a simulation function of reproducing taught motions on the screen to make the operator confirm the motions. In many of such systems, the function of providing a robot with motions is obtained by only replacing a robot or a master arm for teaching with a robot on the screen. The problem that accurate positions and detailed motions must be given by some means is not lightened. Further, in the simulation of motions, the motions obtained when the robot moves accurately in accordance with the motion data are reproduced. In no system is accurate reproduction of motions performed with due regard to the effect of the inertial force caused by the weight of the robot and the weight of the object held by the robot.

SUMMARY OF THE INVENTION

An object of the present invention is to provide a manual operating system capable of giving desired motions in a short time by operation means which is simple and easy to understand, without the necessity of giving accurate positions and detailed motions.

Another object of the present invention is to provide a manual operating system capable of reproducing accurate motion of a robot with the effect of inertial force included before actually moving the robot, and of manually operating the robot safely.

A further object of the present invention is to provide a manual operating system capable of easily selecting three-dimensionally arranged objects.

In accordance with the present invention, therefore, a manual operating system for manually operating a pattern displayed on a screen of a graphics display device, which is included in the manual operating system so as to display generated patterns, comprises cursor information storage means for storing shape and 3-dimensional position information of a cursor, object information storage means for storing a 3-dimensional position and a shape of an object to be selected as well as a mark position corresponding to the object, and perspective projection display means for displaying the cursor, the object and the mark in perspective projection on the basis of information supplied from the above described cursor information storage means and the above described object information storage means and for displaying the result on the above described graphics display device.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is an entire configuration diagram showing an embodiment of a system according to the present invention.
FIG. 2 illustrates contents of cursor information storage means according to the present invention.
FIG. 3 illustrates patterns displayed on a screen of a display device.
FIG. 4 shows the configuration of a map according to the present invention.
FIG. 5 is a flowchart showing the procedure of selection processing.
FIG. 6 is a flowchart illustrating the input procedure of an action directive according to the present invention.
FIGS. 7 to 10 show screens of the graphics display device changing with the input of action directives.
FIG. 11 illustrates contents of action knowledge.
FIG. 12 is a configuration diagram showing another example of a system according to the present invention.
FIG. 13 shows a change of cursor shape in the example shown in FIG. 12.

DESCRIPTION OF THE PREFERRED EMBODIMENTS

Embodiments of the present invention will now be described by referring to the drawings.

FIG. 1 shows the entire configuration of an embodiment of the present invention. In FIG. 1, numeral 1
denotes a locomotive object manually operated by this system, which is a robot in this example.

Numeral 2 denotes a work space map. In the work space map 2, structure information 201 of the robot manually operated by this system, property information 202 of the robot, structure information 203 of objects such as an object to be handled and obstacles existing in the work space in which the robot works, and property information 204 of those objects are stored. The work space map 2 has a stack 205 as well. Numeral 3 denotes action knowledge. In the action knowledge 3, action interpretation knowledge 301 and motion generation knowledge 302 are stored.

Numeral 4 denotes a graphics display device. On the basis of the structure information 201 and 203 of the robot and objects, the graphics display device 4 displays the robot and the objects as specified by the work space map 2, and the cursor, in perspective projection seen from a point of view.

Numeral 5 denotes an action interpreter. According to an action name specified by using a character input device 6, the action interpreter 5 derives a reference point, which should be specified by the operator in the action directive, utilizing the action interpretation knowledge 301 relating to the action stored in the action knowledge 3. The action interpreter 5 derives the concrete three-dimensional position coordinates of the reference point by using the work space map 2 and suitably displays the coordinates thus derived on the screen of the display device 4. When an action directive is completed, the action interpreter 5 delivers an action message corresponding to the action directive to a motion generator 13.

The character input device 6 needs only supply a command and an action name to the action interpreter 5. Therefore, the character input means 6 may be a device such as a keyboard having buttons arranged thereon, or may be so configured that the desired item is selected, by position input means which will be described later, out of a menu displayed on the screen of the display device 4.

Numeral 7 denotes a three-dimensional position input device such as a three-dimensional joystick, which continuously outputs displacements Δx, Δy and Δz respectively of the x, y and z axis directions in the space in accordance with manual operation of the device.

Numeral 8 denotes cursor information storage means. In the cursor information storage means 8, the current three-dimensional position 2001 of the cursor (hereafter referred to simply as the cursor position), coordinates of points 2002 which are shape data representing a three-dimensional shape, and links of points 2003 are stored. By adding the values of the cursor position 2001 to the respective values of the relative coordinates 2002, absolute coordinate values in the three-dimensional space are obtained. By connecting the points with segments of lines in accordance with the specified links of points 2003, a pattern such as the pattern 3001 illustrated in FIG. 3 is formed. The value of the cursor position 2001 is continuously rewritten by adding the output value of the position input device 7 thereto.
In the object structure information 203 included in the work space map 2, respective objects are represented by nodes 2110, 2120 and 2130. The information contained in a box associated with each object in FIG. 4 is referred to as a node. In the nodes 2110, 2120 and 2130, the positions of the respective objects are described by relative position and posture parameters 2111, 2121 and 2131 with respect to the objects 2120 and 2140 respectively supporting the objects represented by the nodes. For example, objects existing on the floor are supported by the floor, and objects attached to other parts are supported by those parts. The objects supporting the respective objects are described by links 2112, 2122 and 2132 as shown in FIG. 4. The concrete position of a certain object is derived by starting from the node of that object, tracing the links up to a root node 2150, and synthesizing all of the position and posture parameters of the objects on the way. If 4x4 matrixes Ai representing parallel transfer and rotational transfer values are used as the position and posture parameters, the matrix A representing the concrete position and posture of the cap 2110 shown in FIG. 4 is derived as

A = A3·A2·A1.
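The patent derives the concrete position of an object by tracing the support links up to the root node 2150 and synthesizing the 4x4 position-and-posture matrices on the way (e.g. A = A3·A2·A1 for a cap supported by a body standing on the floor, as in FIG. 4). A minimal sketch of that traversal, with all data and names assumed for illustration:

```python
# Sketch (assumed names and data): each node stores a 4x4 transform relative
# to its supporting object plus a link to that supporter; the concrete
# transform is the product of the matrices traced from the root downward.

def translation(tx, ty, tz):
    """4x4 homogeneous matrix for a parallel transfer (translation only,
    to keep the example small; rotations would fill the upper-left 3x3)."""
    return [[1.0, 0.0, 0.0, tx],
            [0.0, 1.0, 0.0, ty],
            [0.0, 0.0, 1.0, tz],
            [0.0, 0.0, 0.0, 1.0]]

def matmul(a, b):
    """Multiply two 4x4 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Node tree: name -> (relative transform, supporting node; None marks the root).
NODES = {
    "floor": (translation(0.0, 0.0, 0.0), None),       # root
    "body":  (translation(20.0, 30.0, 0.0), "floor"),  # A2, "floor, on"
    "cap":   (translation(0.0, 0.0, 20.0), "body"),    # A1, "body, attached"
}

def concrete_transform(name):
    """Trace the links up to the root and synthesize the parameters,
    so that the supporter's transform is applied last: A3 * A2 * A1."""
    transform = translation(0.0, 0.0, 0.0)             # identity
    while name is not None:
        relative, supporter = NODES[name]
        transform = matmul(relative, transform)
        name = supporter
    return transform

A = concrete_transform("cap")   # concrete position and posture of the cap
```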
Further, in each node, coordinates 2113 of points, links 2114 of points, and a mark position 2115 are stored. The mark position 2115 may be calculated by using the action knowledge 3 or may be derived from the property information 204 of the objects. The relation between the position of an object and the coordinates 2113 and links 2114 is similar to that of the cursor information; the position is calculated from the above described links of objects and the position and posture parameters, and therefrom an object pattern such as the pattern 3002 shown in FIG. 3 is formed. Further, the absolute coordinates of a mark 3003 are derived by adding the position of the object to the value of the mark position 2115. On the basis of the above described position information and point links, the perspective projection display means 9 shown in FIG. 1 displays the shapes of the cursor 3001, the object 3002 and its mark 3003, and generates patterns in perspective projection on the graphics display device 4. However, the shape data 2002 and 2003 of the cursor 3001 are used in substitution for the shape of the mark 3003. When a button input is supplied from an input device 11, judging means 10 compares the cursor position 2001 and the absolute position of each mark 3003, finds out the nearest mark among the marks existing within a certain distance from the cursor, and outputs an object number corresponding to the nearest mark. Display changing means 12 changes the color of the object selected by the judging means 10 and demands the perspective projection display means to cause a blinking display. At the same time, the display changing means 12 changes the display to make the cursor position coincide with the selected mark position of the object.

On the basis of the action message supplied by the action interpreter 5, the motion generator 13 generates concrete motion data capable of driving the robot by using the work space map 2 and the motion generation knowledge 302.

A changeover switch 14 is provided for delivering the above described motion data toward a simulator when simulation carried out by a dynamics simulator 15 is needed before the actual robot 1 is moved by the motion data. By solving equations of motion by using the weight, moment of inertia and the like included in the structure information 201 and the property information 202 of the robot stored in the work space map 2, the dynamics simulator 15 simulates the behavior of the robot, which is obtained when the robot is moved in accordance with the motion data supplied from the motion generator 13, with due regard to dynamical factors, and displays the
result on the screen of the display device 4 as motions of the robot 3010.

The robot structure information 201 contains shape, position, posture and linkage information for each link part of the robot 1 in a similar way to the objects structure information 203. When the cursor position is rewritten, the positions and postures in the robot structure information 201 are also rewritten so that the gripper position of the robot coincides with the cursor position. Therefore, the robot displayed on the graphics display moves interlocked with the cursor movement.

The operation of the manual operating system according to the present invention will now be described.

First of all, manual operation of the cursor will now be described by referring to the flowchart shown in FIG. 5.

When this selection method is called by the main program, the cursor position 2001 is initialized (S1). The displacement of position is read from the position input device 7 (S2), added to the cursor position 2001 (S3), and displayed in perspective projection by the perspective projection means 9 (S4). Here, the input of the button input device 11 is examined. If the input button is not depressed, the program returns to the step S1 to repeat processing.

As a result, the cursor 3001 moves while it is being interlocked with the manual operation at the position input device 7.

If a button is depressed, the three-dimensional distances between the cursor position 2001 and the respective object mark positions are calculated (S6). If there is a distance not larger than a certain value D (S7), the object corresponding to the mark having the smallest distance is found (S8), and the color of the display of the object is changed. This is achieved by rewriting only the selected object with a different color (S9). In addition, the position of the cursor is aligned with the displayed mark position of the selected object (S10). At this step, the display changing means may eliminate the other marks to make the selected object more distinct, and the result is displayed (S10).

In accordance with this manual operation of the cursor, not only the object but also the cursor and the mark are displayed in perspective projection. Since the three-dimensional shape of the cursor is the same as that of the mark, it is possible to grasp whether the cursor is beyond or on this side of the mark by comparing their sizes on the screen. Even if the mark and the cursor are not located at a completely identical position in judging selection, selection can be easily performed by rough alignment aided by size comparison, because the mark having the shortest distance is selected out of the marks having distances from the cursor which are not larger than a certain value. Further, it is possible to confirm that selection has been properly done from the fact that the mark of the selected object coincides with the cursor and the display of the object is changed.
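The judging steps S6 to S8 (calculate the three-dimensional distance from the cursor to every mark, then find the object whose mark has the smallest distance, provided it is not larger than the certain value D) can be sketched as follows. This is an illustrative sketch; the function name, data layout and example values are assumptions.

```python
# Sketch (assumed names): select the object whose mark is nearest the
# cursor, within a threshold distance D, as the judging means 10 does.
import math

def nearest_mark(cursor_pos, mark_positions, threshold):
    """Return the object number of the nearest mark within `threshold`,
    or None when no mark is close enough (test S7 fails)."""
    best_obj, best_dist = None, threshold
    for obj_number, mark in mark_positions.items():
        dist = math.dist(cursor_pos, mark)   # three-dimensional distance (S6)
        if dist <= best_dist:                # not larger than the value D (S7)
            best_obj, best_dist = obj_number, dist
    return best_obj                          # smallest distance wins (S8)

marks = {3020: (10.0, 10.0, 45.0), 3030: (60.0, 10.0, 45.0)}
selected = nearest_mark((12.0, 11.0, 44.0), marks, threshold=5.0)
```

Because only the smallest in-range distance is kept, the cursor need not coincide exactly with a mark, matching the rough-alignment behavior described above.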
The procedure for manually operating the robot by using the system according to the present embodiment will now be described concretely by referring to FIGS. 6 to 10.

First of all, the display device 4 displays the objects 3020, 3030 and 3040 on the screen by using the structure information 203 of objects included in the work space map 2, and displays the robot 3010 and the cursor 3001 by using the position supplied from the position input means 7 and the structure information 201 of the robot. In order to maintain the current state of the work space map 2, the contents of the structure information 201 of the robot and the structure information 203 of objects are copied into the stack region 205. The operator watches this screen, determines the action to be performed by the robot 1, and inputs the action name by using the character input means 6. It is now assumed that the image shown in FIG. 7 is displayed on the screen and the action to be done by the robot 1 is to attach a bolt 3020 to a part 3030.

The operator inputs the working name "CARRY" into the action interpreter 5 by using the character input means 6. The action interpreter 5 looks for the description 311 relating to "CARRY" in the action interpretation knowledge 301 as shown in FIG. 11, and displays the position of the grasp point of each object on the screen of the display device 4 as the mark 3021 on the basis of the description thus found (FIG. 8). A grasp point means that the object can be grasped at that point, and it is derived from the property information 204 of objects. To be concrete, the grasp point can be derived by describing in the property information 204 of objects that the bolt comprises a hexagonal pillar and a column, and by preparing in the action knowledge 3 a rule that if there is a polygonal pillar having a width smaller than the maximum opening width of the gripper of the robot, the bolt can be grasped at that polygonal pillar portion. Or, as the simplest method, the concrete position of the grasp point may be described in the property information beforehand.

The operator makes the cursor 3001 approach a mark located near the grasp point of the bolt by using the position input device 7 and gives a sign to the judging means 10 by using a button of the button input device 11. At this time, the cursor 3001 need not completely coincide with the mark. It is judged by the above described procedure that the mark located at the nearest position among the marks located within a certain distance from the cursor 3001 when the sign is given has been selected. Thereby the judging means 10 judges that the operator has selected the bolt. The display changing means 12 makes the cursor 3001 coincide with the position of the grasp point and also eliminates the other marks. The display changing means 12 changes the color of the bolt to signify to the operator that the bolt has been selected. On the basis of the result of this selection, the action interpreter 5 rewrites the structure information 201 and 203 of the robot and objects included in the work space map 2 so that the information may represent the state in which the bolt leaves the floor surface and is held by the gripper of the robot. This is achieved by rewriting the link 2132 shown in FIG. 4 so as to replace "floor, on" with "gripper, fixed" and rewriting the position and posture parameter correspondingly. If the cursor 3001 is thereafter moved on the screen, therefore, not only the robot but also the bolt is displayed to move with the gripper of the robot. Further, the action interpreter 5 writes into the structure information 201 of the robot the information that the robot has held the bolt.

Then the operator can specify the path, along which the bolt is to be carried, at a certain number of points. For the purpose of specification, the operator moves the cursor 3001 to a suitable position and gives a sign by the character input device 6 to the action interpreter 5. Thereby that position is registered as a path point (FIG. 9). And the operator gives a sign meaning the completion of the path point specification to the action interpreter 5 by the character input device 6, the specification of the path points thus being finished. If the operator need not specify path points, the operator may give the sign meaning completion of the path points at the beginning.
Lastly the action interpreter 5 derives the "PUT POINT" from the objects property information 204 on the basis of the remaining description of the description 311 and displays it with a mark 3041 on the screen. The "PUT POINT" means a position where the bolt can be placed or mounted. On the basis of the robot structure information 201, the action interpreter 5 knows that the robot holds the bolt at that time. Therefore, the action interpreter 5 derives only the "PUT POINTS" relating to the bolt out of the "PUT POINTS" for other various objects and displays them (FIG. 10).

The operator makes the cursor 3001 approach the "PUT POINT" of the desired part and sends a sign to the judging means 10 by the button input device 11. In accordance with the same procedure as the foregoing one, the judging means 10 judges the mounting position. The display changing means 12 aligns the bolt with that position, and restores the bolt to its original color. The action interpreter 5 rewrites the robot and objects structure information 201 and 203 included in the work space map 2 so as to represent the state in which the bolt leaves the gripper and is fixed at the mounting position. With respect to a motion of the cursor 3001, therefore, only the robot 3010 moves on the screen thereafter, and the bolt is displayed fixed at the mounting position.

When the processing heretofore described has been finished, the action interpreter 5 generates an action message such as carry (obj 1, path, obj 2) by using the action name "CARRY", a symbol "obj 1" bound to the names of the bolt and the "GRASP POINT", a symbol "obj 2" bound to the names of the part whereto the bolt is to be attached and the "PUT POINT", and a symbol "PATH" bound to the path points. The action interpreter 5 transfers the action message to the motion generator 13. Upon receiving the action message from the action interpreter 5, the motion generator 13 restores the work space map to its state of step 32 shown in FIG. 6 by using the stack 205. The motion generator 13 looks for the description relating to "carry" in the motion generation knowledge 302 as shown in FIG. 11, and breaks down "carry (A, P, B)" into "take (A)", "path (P)" and "put (B)". At this time, the symbols obj 1 and obj 2 are substituted into the arguments A and B. The description "take (A)" is further broken down into "grasp (A)" or "detach (A)". Since it is known from the objects structure information 203 that the bolt specified by "obj 1" is not attached to another object, however, "grasp (A)" is chosen. As for "put (A)", it is known from the objects structure information 203 that "obj 2" is the position whereat the bolt is mounted. Therefore, "attach (A)", which means a motion of attaching the bolt by screwing, rather than "put on (A)", which means a motion of simply placing the bolt on the object, is chosen. The motion "grasp (A)" is further broken down into detailed motions of moving the gripper to the vicinity of the bolt, taking the gripper to the holding position slowly, closing the gripper, and moving the gripper right above. In order to derive the position coordinates of the object depending upon the change caused in the object position with the progress of motion, the robot and objects structure information 201 and 203 is rewritten along with the generation of the motion. The motion data including detailed coordinate values is thus generated from the action message via the particularization heretofore described.
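The breakdown of "carry (A, P, B)" described above can be sketched as a small rule-based function following the FIG. 11 rules. This is an illustrative sketch, not the patent's implementation; the data structures and names are assumptions.

```python
# Sketch (assumed names): break an action message carry(A, P, B) into
# take(A), path(P), put(B), then refine take/put using the work space map.

def break_down_carry(obj_a, path, obj_b, attached, mount_points):
    """carry(A, P, B) -> [take(A), path(P), put(B)], with take and put
    particularized from the object structure information."""
    # take(A) = grasp(A) or detach(A): detach only if A is attached
    # to another object; otherwise grasp it.
    take = ("detach", obj_a) if attached.get(obj_a) else ("grasp", obj_a)
    # put(B) = attach(B) or on(B): attach when B is a mounting position
    # (e.g. screwing a bolt in), on when B is simply a resting place.
    put = ("attach", obj_b) if obj_b in mount_points else ("on", obj_b)
    return [take, ("path", path), put]

attached = {"bolt": None}        # the bolt is not attached to another object
mount_points = {"part_hole"}     # obj 2 is a position whereat the bolt mounts
motions = break_down_carry("bolt", ["p1", "p2"], "part_hole",
                           attached, mount_points)
# motions == [("grasp", "bolt"), ("path", ["p1", "p2"]), ("attach", "part_hole")]
```

Each primitive on the right-hand side would in turn be expanded into gripper-level motions (approach, close, lift), as the text describes for "grasp (A)".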
The motion data generated by the motion generator 13 is temporarily stored in the motion generator 13, and transferred to either the robot 1 or the simulator 15 via the switch 14. If the operator wants to know whether the generated motion data are proper or not, the operator is able to send them to the simulator 15 and perform dynamics simulation. By using the weight of the robot, its moment of inertia, the coefficient of friction of a revolute joint, and so on stored in the robot property information 202, the equation of motion for moving the robot with the motion data is numerically solved, and the result is displayed as motions of the robot 3010 on the screen of the display device 4. In order to grasp the movement of the object with the progress of motion in this display as well, the work space map is restored to the state of the step 32 illustrated in FIG. 6 by the stack 205. With the progress of motion, the robot and object structure information 201 and 203 are rewritten.

If there is no problem in the result of simulation, the operator sets the switch 14 to the robot 1 side and moves the actual robot 1 with the motion data stored in the motion generator 13. If the result of simulation indicates that the robot might collide with an obstacle on the way of carriage of the bolt 3020 to the position of the part 3030 by the robot under the situation shown in FIGS. 7 to 10, for example, the operator gives a modified directive of motion such as a modified specification of path points.

Since the actual object existing in the work space is not always located at the position stored in the work space map, it is necessary to correct the discrepancy between the position on the map and the actual position. For motions requiring precise alignment with respect to an object, such as motions grasping the object, for example, the correction just described is achieved by introducing motions of approaching the object, while the distance from the object is being measured by means of a distance sensor, into the breaking down process written in the motion generation knowledge 302, and by adding information relating to the use of the sensor to the motion data. When the robot actually moves, the robot can move while the discrepancy with respect to the map is being corrected by means of the sensor. When abnormality is sensed by means of the sensor or the like during the course of the motion of the robot, the motion is stopped, and that fact is displayed on the screen of the display device 4 by means of characters or the like.

In the present embodiment, the operator need provide neither concrete coordinate value data nor accurate position information to provide the robot with motions. The operator need only specify the principal points of working by selecting reference points successively displayed. Therefore, the operator can simply provide the robot with motions in a very short time. Further, motions can be confirmed beforehand by using simulation with due regard to dynamics, and hence safe manual operation is assured.

The operator can try various motions beforehand while repeating observation of simulation results and amendments of motions depending thereupon. Therefore, the operator can find the motions meeting the wish of the operator to the highest degree out of the motions thus tried. If the action message is preserved, it is possible to give the motions to robots of various shapes with the
9
4,987,527
identical action message by only interchanging the
robot structure information 201.
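The breakdown described above can be illustrated with a toy sketch. This is not the patented implementation; the function `break_down`, the `OBJECTS` table, and the primitive motion names are all hypothetical, chosen only to mirror the “take (A)” → “grasp (A)”/“detach (A)” and “put (A)” → “attach (A)”/“put on (A)” decisions made by consulting the objects structure information:

```python
# Illustrative sketch (not the patent's implementation): breaking an
# action message down into primitive motions by consulting a toy
# "objects structure information" table. All names are hypothetical.

OBJECTS = {
    "obj 1": {"attached_to": None},    # a loose bolt
    "obj 2": {"mount_point": True},    # the hole where the bolt is screwed in
}

def break_down(action, obj):
    """Return the list of primitive motions for one action message."""
    if action == "take":
        # A loose object is grasped; an attached one must be detached first.
        if OBJECTS[obj]["attached_to"] is None:
            return ["move_near", "approach_slowly", "close_gripper", "move_up"]
        return ["detach"]
    if action == "put":
        # Screwing into a mount point, rather than simply placing on top.
        return ["attach"] if OBJECTS[obj].get("mount_point") else ["put_on"]
    raise ValueError(f"unknown action: {action}")
```

Under these assumptions, `break_down("take", "obj 1")` yields the detailed grasping motions, while `break_down("put", "obj 2")` chooses attaching over simple placement.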
By the motion of the robot 3010 on the screen interlocked with the input of the position input device 7, the operator is able to intuitively know what posture the robot assumes or what the movable range is depending upon the position of the gripper. Information useful in considering the action procedure can thus be given efficiently.

Since motions can be given simply in a very short time, the present embodiment can be used not only for teaching a robot which performs repetitive working, such as an industrial robot, but also for manually operating a robot in a construction field in an interactive way.

Since the arrangement of objects in the operation environment is displayed on the screen and the manual operating system can be placed at a distance from the robot, the present embodiment can also be used for remote manual operation of a robot in an environment a person cannot enter, such as an atomic power plant.

In the above described embodiment, the motion data generation process is performed every time an action message is generated. After action messages are accumulated to some degree, however, motion data may be generated from them together.

Further, if a place for preserving those action messages and a mechanism capable of editing and managing those action messages are prepared, motions of the robot can be edited and managed at the action level. Therefore, the manual operator can establish a motion plan of a higher degree.

If it is permitted to put together some action messages to define a new action message, it is possible to simply direct action of a higher degree.

In the above described embodiment, the path for carrying an object is given simply as points. If these points are provided with an attribute that the path may pass through the vicinity of the point, or an attribute that the path must pass through the point accurately, for example, more complicated working directives can be given.
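The path-point attributes suggested above can be sketched as follows. This is a hedged illustration under assumed names (`PathPoint`, `tolerance`) and an assumed millimeter unit; the patent does not specify how a motion generator would consume such attributes, but one natural reading is that the attribute becomes a positional tolerance:

```python
# Hypothetical sketch of path points carrying the attributes suggested
# above: points that may be passed near ("via" points) and points that
# must be passed through accurately. A motion generator could translate
# the attribute into a position tolerance for the controller.

from dataclasses import dataclass

@dataclass
class PathPoint:
    xyz: tuple           # target coordinates
    exact: bool = False  # True: pass through accurately; False: vicinity suffices

def tolerance(p: PathPoint, loose=20.0, tight=0.5):
    """Positional tolerance (mm, an assumed unit) granted to the controller."""
    return tight if p.exact else loose

path = [PathPoint((0, 0, 100)), PathPoint((50, 0, 10), exact=True)]
tolerances = [tolerance(p) for p in path]   # [20.0, 0.5]
```

A planner could then smooth the path freely near "via" points while converging precisely on "exact" points.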
If the present system is used in construction of a plant, for example, and the plant is designed by using CAD, it is also possible to derive information for the objects structure information in the work space map 1 from data of that CAD. Further, if parts handled by the robot are limited, it is also possible to make and modify the work space map by a visual recognition device.

In a present invention system, it is also possible to use a 2-dimensional position input device instead of a 3-dimensional one by providing allotting means 110 between the cursor information storage means 8 and the position input device 7, as well as between the cursor information storage means 8 and the perspective projection display means 9, and by providing a second button input device 111 different from the button input device 11, as shown in FIG. 12.

The second button input device 111 is provided on the position input device 7 in the same way as the first button input device 11. The allotting means 110 performs changeover to add the displacements Δx and Δy in the x and y axis directions supplied from the position input device 7 either to the x and y coordinate values or to the x and z coordinate values, and performs changeover to specify which of two types of shape data for the cursor is used in displaying the cursor on the screen. The changeover operation is performed whenever the button input device 111 is supplied with input. For every button depression on the button input device 111, therefore, the constraint plane on which the cursor is movable is changed over between the x-y plane and the x-z plane, and at the same time the shapes of the displayed cursor and marks are changed as shown in FIG. 13. Even if a two-dimensional position input device is used, therefore, it is possible to change the position of the cursor in a three-dimensional way. Further, since the shape of the cursor changes whenever the constraint plane on which the cursor can move is changed over, the operator is able to know which direction the cursor moves in and to spatially perceive the constraint plane.
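The changeover performed by the allotting means can be sketched in a few lines. This is an assumed minimal model, not the patent's circuitry: the class name `Allotter` and its methods are hypothetical, but the behavior mirrors the description, with a two-axis displacement routed to either the x-y or x-z plane and a button toggling the constraint plane:

```python
# Hypothetical sketch of the "allotting means 110": a 2-D device's
# displacements (dx, dy) update either (x, y) or (x, z) of the 3-D cursor,
# and each press of the second button toggles the constraint plane.

class Allotter:
    def __init__(self):
        self.cursor = [0.0, 0.0, 0.0]   # x, y, z
        self.plane = "xy"               # current constraint plane

    def toggle(self):
        """Called on each depression of the second button input device."""
        self.plane = "xz" if self.plane == "xy" else "xy"

    def move(self, dx, dy):
        self.cursor[0] += dx
        if self.plane == "xy":
            self.cursor[1] += dy        # dy drives the y axis
        else:
            self.cursor[2] += dy        # dy drives the z axis

a = Allotter()
a.move(1.0, 2.0)        # moves in the x-y plane
a.toggle()
a.move(0.0, 3.0)        # now moves in the x-z plane
# a.cursor is now [1.0, 2.0, 3.0]
```

A real implementation would also switch the cursor's shape data on each toggle, as the description notes, so the operator can see which plane is active.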
Owing to the present invention, three-dimensionally arranged objects can be selected by only moving the cursor three-dimensionally displayed on the screen and roughly matching the position and size of the cursor with those of a mark affixed to an object. Therefore, a desired object can be easily selected, and the selection can be made easy to understand.
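This mark-based selection can be illustrated with a small sketch. The threshold value, the `marks` table, and the function `select` are assumptions for illustration only; the idea, as described, is that an object is chosen when the differences between the cursor's position and size and a mark's position and size all fall within predetermined values:

```python
# Hedged sketch of mark-based selection: an object is selected when the
# differences between the 3-D cursor position (and size) and a mark's
# position (and size) are all within a predetermined threshold.

marks = {
    "obj 1": {"pos": (10.0, 5.0, 2.0), "size": 1.0},
    "obj 2": {"pos": (40.0, 5.0, 2.0), "size": 1.0},
}

def select(cursor_pos, cursor_size, threshold=2.0):
    """Return the name of the first mark matching the cursor, else None."""
    for name, m in marks.items():
        diffs = [abs(c - p) for c, p in zip(cursor_pos, m["pos"])]
        diffs.append(abs(cursor_size - m["size"]))
        if all(d <= threshold for d in diffs):
            return name
    return None
```

Rough matching suffices: a cursor near (11, 5.5, 2) with a roughly matching size would select "obj 1", while a cursor far from every mark selects nothing.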
In the present invention system, the operator can see the movement of an object corresponding to a motion of the robot. Therefore, the motion situation of the robot, including the movement of the object to be handled, can be accurately verified. As a result, it is possible to obtain information which is useful for the operator to establish a plan of motions.

Since the object to be handled moves in accordance with motions of the robot, it is possible to confirm on the screen during teaching whether or not interference is present between the object held by the gripper of the robot and another object such as an obstacle.

Even if a mistake is made on the way of teaching, it is possible to try the teaching again from the state immediately preceding the mistake, without trying the whole from the start again, owing to the function of the stack. Thereby it is possible to establish a motion plan in a trial and error fashion.

In a map according to the present invention, the position of an object is represented by a treelike hierarchical structure. By only rewriting the position and posture parameter A2 of body shown in FIG. 2, for example, not only the position and posture of body but also those of cap placed on the body change. The movement of an object in the actual world can thus be represented efficiently.
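The treelike map's key property, that rewriting one node's pose also moves everything attached to it, can be sketched as follows. The dictionary layout and the function `world_position` are assumed for illustration (the patent's map stores postures as well as positions; only translation offsets are shown here):

```python
# Minimal sketch (assumed structure, not the patent's data format) of the
# treelike map: each node stores its position relative to its parent, so
# rewriting only the "body" entry also moves the "cap" placed on it.

tree = {
    "world": {"parent": None,    "offset": (0.0, 0.0, 0.0)},
    "body":  {"parent": "world", "offset": (1.0, 0.0, 0.0)},  # a parameter like A2
    "cap":   {"parent": "body",  "offset": (0.0, 0.0, 0.5)},
}

def world_position(name):
    """Accumulate relative offsets from the node up to the root."""
    x = y = z = 0.0
    while name is not None:
        node = tree[name]
        ox, oy, oz = node["offset"]
        x, y, z = x + ox, y + oy, z + oz
        name = node["parent"]
    return (x, y, z)

tree["body"]["offset"] = (2.0, 0.0, 0.0)   # rewrite only the body's position
# the cap follows automatically: world_position("cap") == (2.0, 0.0, 0.5)
```

One rewrite per moved object thus keeps the whole map consistent, which is why the representation is efficient.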
We claim:

1. In an operation system having a graphic display means for displaying patterns on a screen and operating patterns of objects displayed on the screen, said system comprising:
position input means for producing displacements in directions of X, Y and Z axes in a three-dimensional space;
cursor information storage means for storing three-dimensional position data and shape data of a cursor and rewriting data stored therein in response to displacements produced by said input means;
object information storage means for storing three-dimensional position data and shape data of objects and storing position data of a mark which is provided for each object to spatially designate a location of the object;
perspective converting projection display control means for referring to three-dimensional position data and shape data of the cursor stored in said cursor information storage means, converting the three-dimensional position data into converted three-dimensional position data for use in perspective projection of a converted cursor onto said screen and generating display signals to display the converted cursor at a converted position indicated by the converted three-dimensional position data and with a size corresponding to a converted position on said screen, said projection display control means further refers to the three-dimensional position data and shape data of the objects and the position data of the mark positions stored in said object information storage means, converts the three-dimensional position data and shape data of the objects and the position data of the marks into converted three-dimensional position data and shape data of converted objects and converted position data of converted marks for use in perspective projection of the converted objects and converted marks onto said screen and generating display signals to display respective converted objects and marks at converted positions and with converted shapes and sizes corresponding to the converted positions on said screen, and display signals being outputted to said graphic display means;
means for generating a trigger signal; and
judging means, responsive to the trigger signal, for comparing current three-dimensional position data of said cursor information storage means and current position data of a current mark of said object information storage means in order to detect differences not larger than predetermined values to thereby select an object corresponding to a mark with the detected differences;
wherein said objects are to be operated by a robot; and
said system further including action knowledge having a collection of robot operating steps of said robot; and
action interpretation means for referring to said action knowledge and instructing said object information storage means to display marks respectively provided at only objects able to be selected by said robot per each of said operating steps.

2. A system according to claim 1, further comprising display changing means responsive to the object selection of said judging means for applying an instruction signal to change display of said selected object to said projection display control means.

3. A system according to claim 1, wherein said object information storage means stores data of a robot structure; and
said projection display control means reads the robot structure data and enables a perspective projection display of the robot in a posture having a gripper put at a position of said cursor and being changed in interlocking relation with movements of said cursor to thereby cause easier identification of three-dimensional moving directions of said cursor.

4. In an operation system having a graphic display means for displaying patterns on a screen and operating patterns of objects displayed on the screen, said system comprising:
position input means for producing displacements in directions of X, Y and Z axes in a three-dimensional space;
cursor information storage means for storing three-dimensional position data and shape data of a cursor and rewriting data stored therein in response to displacements produced by said input means;
object information storage means for storing three-dimensional position data and shape data of objects and storing position data of a mark which is provided for each object to spatially designate a location of the object;
perspective converting projection display control means for referring to three-dimensional position data and shape data of the cursor stored in said cursor information storage means, converting the three-dimensional position data into converted three-dimensional position data for use in perspective projection of a converted cursor onto said screen and generating display signals to display the converted cursor at a converted position indicated by the converted three-dimensional position data and with a size corresponding to a converted position on said screen, said projection display control means further refers to the three-dimensional position data and shape data of the objects and the position data of the mark positions stored in said object information storage means, converts the three-dimensional position data and shape data of the objects and the position data of the marks into converted three-dimensional position data and shape data of converted objects and converted position data of converted marks for use in perspective projection of the converted objects and converted marks onto said screen and generating display signals to display respective converted objects and marks at converted positions and with converted shapes and sizes corresponding to the converted positions on said screen, and display signals being outputted to said graphic display means;
means for generating a trigger signal;
judging means, responsive to the trigger signal, for comparing current three-dimensional position data of said cursor information storage means and current position data of a current mark of said object information storage means in order to detect differences not larger than predetermined values to thereby select an object corresponding to a mark with the detected differences;
an action interpreter for judging action intention of an operator's operation from an action name inputted via character input means and operation of a locomotive object responsive to said position input means and displaced on said screen, for displaying necessary reference points successively and for generating an action message conforming to action intention;
a motion generator for converting the generated action message to motion data suitable for driving the locomotive object; and
work space map means and action knowledge means both for supplying information required by said action interpreter and said motion generator.

5. A system according to claim 4, further comprising:
a changeover switch disposed on a way of delivery of motion data from said motion generator to said locomotive object; and
means for performing simulation by using a dynamics simulator before actually driving the locomotive object, for displaying the result of simulation on the screen of said graphics display means, for confirming validity of the motion.

6. A system according to claim 5, wherein the locomotive object is a robot for transporting goods.

7. A system according to claim 4, wherein the locomotive object is a robot for transporting goods.

8. A system according to claim 4, wherein the position data of said objects are stored in said work space map means in accordance with a treelike structure formed by relative positions and postures of objects as well as their linkage relations.
* * * * *