
Journal of Advanced Manufacturing Systems Vol. 1, No. 1 (2002) 89–105 © World Scientific Publishing Company

TEACHING MANIPULATION SKILLS TO A ROBOT THROUGH A HAPTIC RENDERED VIRTUAL ENVIRONMENT

Y. CHEN* and F. NAGHDY†
School of Electrical, Computer and Telecommunication Engineering, University of Wollongong, NSW 2522, Australia
Fax: +61 42 21 3236, Tel: +61 42 21 3398
*yc12@uow.edu.au
†f.naghdy@uow.edu.au

A new paradigm for programming a robotics manipulator to perform complex constrained-motion tasks is being studied. The teaching of manipulation skills to the machine starts by demonstrating those skills in a haptic-rendered virtual environment. Position and contact force/torque data generated in the virtual environment, combined with a priori knowledge about the task, are used to identify and learn the skills in the newly demonstrated tasks and then to reproduce them in the robotics system. The peg-in-hole insertion problem is used as a case study. The overall concept is described, the methodologies developed to build the virtual environment and to learn the basic skills are presented, and the results obtained so far are provided.

Keywords: Haptics; skill learning; virtual reality; peg-in-hole insertion.

1. Introduction

Robots are widely used as an automation tool to improve productivity in industry. Force-sensitive manipulation is a generic requirement for a large number of industrial tasks, especially those associated with assembly. For example, the automatic insertion of a "peg in a hole" is often taken as the benchmark assembly automation problem, as it concisely represents a constrained-motion, force-sensitive manufacturing task with all the attendant issues of jamming, tight clearances, the need for quick insertion times, reliability, etc. One of the major factors preventing greater use of robots in assembly tasks to date has been the lack of fast, reliable methods of programming robots to carry out such tasks. Hence, robots have in practice been unable to economically replicate the complex force- and torque-sensitive capability of human operators.

In this work, a new paradigm for programming a robotics manipulator is being developed. It is intended that the teaching of the machine will begin with the necessary skills being demonstrated by a human operator in a virtual environment with tactile sensing (haptics). Position and contact force and torque data generated in the virtual environment, combined with a priori knowledge about the task, will be used to identify and learn the skills in the newly demonstrated tasks and then to reproduce them in the robotics system. The use of the virtual environment will simplify the process, as the training data will be directly extracted from the haptic system.1 This is also a novel approach to machine training, enabling this research to take advantage of recent developments in virtual reality and computer simulation.

The developed approach is studied in the context of the peg-in-hole insertion, which represents a large number of assembly tasks. However, the algorithms developed for the system are kept as generic as possible. They identify the basic manipulation skills performed in an automatic assembly and translate them into trajectories and task schedules for a particular application and robotics manipulator. In the remainder of this paper, the design of the virtual environment is described and the skill learning algorithm is introduced.

2. Background

Robotics manipulators have been primarily employed to perform a particular task through programming. The programming methods developed so far can be grouped into four categories: text programming, off-line simulation-based programming, inductive learning and teaching by guiding.

Text programming can be applied to complex applications, but the development time is long, and special skills and much effort are required to produce a complete program. This has resulted in the development of task-level robot languages.2,3 However, generic task-level languages have proved to be quite intensive in code and computing time. Off-line simulation-based methods usually integrate text programming and model-based motion planners in one common platform.4,5 The approach is powerful but requires special hardware and a complete description of the real world, both of which are costly. In inductive learning, a robot arm masters appropriate motion and sensing strategies through trial and error.6 This is an effective method when it is used to refine other programming methods. In "teaching by guiding" a human operator drives the robotics arm in the real world to perform the task while the characteristics of the motion are recorded. In spite of its simplicity, the method is not generic, flexible or robust, and is not applicable to complex tasks. It cannot accommodate extensive sensory interaction and can be dangerous for the operator.

There have been a number of attempts to overcome some of the shortcomings of the "teaching by guiding" approach. Summers and Grossman7 embedded the collection of sensory information and interaction with the operator in the task instruction procedure. Asada and Assari8 extracted the control rules to perform a particular assembly motion from the position and force data generated during the operation of a human operator. Sato and Hirai9 integrated direct teaching with task-level languages through master-slave manipulators.


The concept of "teaching by showing" has been another extension of "teaching by guiding", in which a robotics system learns a particular task by watching a human operator performing it. The learning methodologies were initially developed for computer scene understanding10 and automatic perception of actions.11,12 Some recent developments have significantly advanced the "teaching by showing" approach to programming a robotics manipulator. Ikeuchi and Suehiro developed a system that could extract a fine-motion sequence from transitions of face contact states obtained by a range sensor.13 Haas integrated a symbolic recognizer and a playback module using a visual servo for two-dimensional pick-and-place operations.14 Yamada and Uchiyama conducted a study to determine essential features of human physical skills based on multi-sensory data and the possibility of transferring them to robots, focusing on the two tasks of crank rotation and side matching.15 Kuniyoshi et al. developed a robotics system that could learn reusable task plans in real time by watching a human performing assembly tasks.16 The method was based on visual recognition and analysis of human action sequences. The effectiveness of the method was demonstrated for a block assembly task.

Direct transfer of skills from a human operator to a machine in an interactive environment has been the next stage in the programming and training of a robotics system. In the field of mobile robots, Pomerleau used a 3-layer perceptron network to control the CMU ALVINN autonomous vehicle.17 Grudic and Lawrence used an approximation method as a means for creating the robot's mapping from sensor inputs to actuator outputs in the transfer of skills to a mobile robot.18 In the acquisition of manipulation skills, particularly in constrained motion, the work carried out by Kaiser and Dillmann19 is of significance. The work proposes a general approach to the acquisition of sensor-based robot skills from human demonstration. An adaptation method is also proposed to optimize the operation with respect to the manipulator. The method is validated for the two manipulation tasks of peg-in-hole insertion and opening a door.

Handleman and Lane20 have carried out some preliminary work on a knowledge-based "tell" approach to describe the task to be carried out by the robot and the corrective control measures to be taken. The task is defined by a rule-based, goal-directed strategy. The proposed method has been verified through computer simulation only, for a typical peg-in-hole insertion problem. The development of the rule-based system has been intuitive and rather complicated. The developed rules are very much context-based and have to be built from scratch for any new application.

3. Human and Machine Motor Learning

This work aims at emulating human skill learning in a machine. In addition to self-discovery, learning of skills in humans generally takes place through training by an instructor21 in the psychomotor domain, where "motor" is an observable movement response to a stimulus.22


According to Smith and Smith,23 there are three types of movements. The first is postural movement, which regulates body positioning. The second is locomotor movement, which translates and rotates the body, and the third category includes manipulative movements. The manipulative movements are the focus of this study, as they are the type of movements emulated by robotics manipulators in automatic assembly.

Simpson24 has proposed a model with seven hierarchical classification levels for human behaviour in the psychomotor domain. The model was developed as a taxonomy and consists of:

(a) Perception, which deals with sensory stimulation, cue selection and translation.
(b) Set, which deals with mental, physical and emotional sets.
(c) Guided response, which deals with imitation and trial-and-error learning.
(d) Mechanism, which deals with the mechanics and habituation of movements.
(e) Complex overt response, which represents a higher level of skill in performing the motor skill by the learner.
(f) Adaptation, which adapts the skills to variation in the environment and task.
(g) Orientation, which represents automatic behaviour.

The first two levels do not represent easily observable behaviour. The next three levels represent a learning sequence inherent in many motor skills. The last two levels can be interpreted as refinement of basic motor skills and the creation of new movement patterns to achieve the same objectives. In the context of manipulative movements, in the last two levels, slow, stiff and cautious movements transform into smooth, ballistic trajectories requiring much less mental concentration.25–27 This transformation from a cognitive form of processing to a more automatic one has been described by terms such as explicit versus implicit,28 deliberative versus reactive,29 declarative versus procedural,30 and declarative versus reflexive.31

The emulation of such a taxonomy in machine learning is not fully achievable, due to basic differences between humans and machines. In this work, the extent to which manipulative learning can be emulated through demonstration is explored, with particular focus on stages (a), (c) and (d)–(f), as these are of particular relevance to manufacturing applications of machines and robots.

4. Overview of the Proposed System

A manipulation skill is the ability to transfer, physically transform or mate a part with another part. A specific manipulation skill consists of a number of basic skills that, when sequenced and integrated, can achieve the desired manipulation outcome. The manipulation task (Ms) is applied to the part by the human operator through an action uh(t), transferring the part from an initial state xh(ti) to a final state xh(tf). The control action command uh provides position and force/torque settings. Depending on the type of manipulation, the state vector can represent position, orientation and dimension of the part, or its contact forces/torques with the environment.


Fig. 1. Overall model of the system. (Blocks shown: human operator, haptics device, virtual manipulation environment, prior knowledge about the task, perception module, learning module, manipulator task planner module, force/position feedback module, lower level controller, force sensor, robotics manipulator and the manipulation task; signals uh, yh, ym and um.)

The measured state variables at any instant of time t represent the output of the manipulation system yh(t). The variables x, u and y are vectors.

Figure 1 describes the structure of the system that will be developed in this research. As shown in this figure, the robotics manipulator mimics the behaviour of the human operator by acquiring the skills and producing the machine control action um(t) from yh(t). It is envisaged that the proposed system will closely emulate and facilitate the relevant stages of the human motor learning taxonomy for a robotics manipulator, as described below:

(i) The human operator performs the manipulation task in a virtual environment using a haptic device. The haptic device provides the operator with contact forces and torques similar to those in a real-life operation.
(ii) The information produced in the virtual environment, yh(t), is used by the Perception Module to identify the basic skills and functions employed in the operation and to extract the algorithm sequencing the applied skills. This is stage (a) (perception) of the taxonomy.
(iii) The information produced in (ii) is passed to the Manipulator Task Planner to be translated into position/force trajectories and associated control algorithms for the robotics manipulator. Initially, um is generated based on the information received from the Perception Module, the output of the machine manipulation system ym(t), and prior knowledge about the task. The performance of the manipulator under um is then compared with the expected behaviour. The manipulator trajectory and um are adjusted according to the error to produce a behaviour as close as possible to the manipulation performed by the human. This is stage (c).


(iv) After satisfactory imitation, information from the Learning Module will be taken into account to calculate um. The Learning Module performs various optimization processes to enhance the performance [stages (d)–(g)].

Such a system will be most effective when the Perception and Learning Modules are generic. The Virtual Manipulation Environment will be dependent on the application, and the Task Planner will be dependent on the manipulator employed. The work so far has focussed on the Virtual Manipulation Environment and the Perception Module.

5. Virtual Manipulation Environment

The data used by the machine to acquire basic manipulation skills is generated through a haptic-rendered virtual environment. This approach offers a number of advantages compared to other methods of obtaining training data, including:

(1) The training data (e.g. velocities, angles, positions, forces and torques) can be extracted and recorded directly, which simplifies the data collection process.1
(2) The environment can be easily modified and changed as the manipulation process and its requirements change.
(3) The risk of breakdown and breakage of the system is very low.
(4) Dangerous and costly environments can be easily constructed and simulated.
(5) A user-friendly environment for the human operator can be developed.

The concepts and methodologies developed in this work are demonstrated for the peg-in-hole insertion, which represents a typical manipulation task in assembly. The haptic rendering is provided through a generic 3-degree-of-freedom device, the PHANTOM, manufactured by SensAble. It allows users to interact directly with digital objects as they do in the real world. The GHOST® SDK, the software supplied with the PHANTOM, can handle complex computations and allows developers to deal with simple, high-level objects and physical properties such as location, mass, friction and stiffness.32

The peg-in-hole insertion model in the virtual environment is constructed in two stages: a geometrical model and a haptic rendering model. In the following sections, each is described.

6. Geometrical Model

The geometrical modelling is carried out using the Ghost modelling package supplied with the PHANTOM haptic device. The developed virtual environment and its coordinate frame are shown in Fig. 2. The peg is coupled with the PHANTOM (i.e. the manipulation point) through a spring-damper system. The peg is a dynamic rigid object in the virtual environment, and the reaction force and torque on the peg are transferred to the PHANTOM through the spring-damper system. The hole is static in the environment while the peg can be translated and rotated.
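The paper does not give the equations of the spring-damper coupling; the following is a minimal sketch of one plausible form, assuming a simple linear spring-damper acting between the haptic interface point and the peg's attachment point. The gain values are illustrative assumptions only.

```python
import numpy as np

# Minimal sketch of the virtual spring-damper coupling between the haptic
# interface point (PHANTOM) and the dynamic peg. Gains are assumed values;
# the paper only states that a spring-damper couples the device to the peg.
K_SPRING = 0.5    # N/mm, translational stiffness (assumed)
B_DAMPER = 0.005  # N*s/mm, translational damping (assumed)

def coupling_force(device_pos, device_vel, peg_pos, peg_vel):
    """Force applied to the peg (fed back to the device with opposite sign)."""
    stretch = np.asarray(device_pos) - np.asarray(peg_pos)   # spring elongation
    rel_vel = np.asarray(device_vel) - np.asarray(peg_vel)   # relative velocity
    return K_SPRING * stretch + B_DAMPER * rel_vel

# Example: the device leads the peg by 2 mm along Y while both are at rest.
f = coupling_force([0.0, 2.0, 0.0], [0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0], [0.0, 0.0, 0.0])
print(f)  # pulls the peg towards the device: [0.  1.  0.] N
```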


Fig. 2. Peg-in-hole insertion virtual environment (x-, y- and z-axes indicated).

Fig. 3. Peg-in-hole insertion virtual environment (triangle polygons forming the hole; the center of the hole is indicated).

The geometrical model of the hole is constructed as a triangle polygon mesh. The mesh is formed by first defining four triangle polygons and then rotating these polygons around the center of the hole (Fig. 3). The graphic model of the hole is constructed in a similar way using OpenGL supplied with the Ghost SDK. In order to reduce the computation time, only the polygons on the top and inner surfaces of the hole are considered during haptic rendering. If the normals of the polygons on the inner surface point towards the viewer, these polygons are visible; otherwise, they are invisible.

The geometrical model of the hole can also be constructed using two cylinders forming the inner and outer surfaces of the hole. At this stage, the polygon mesh is employed to construct the geometrical model of the hole, as it is simpler for haptic rendering.
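A minimal sketch of this mesh construction is given below, assuming a hole of given inner radius, outer radius and depth, with the hole axis taken as the y-axis; the dimensions and segment count are illustrative assumptions, not values from the paper.

```python
import numpy as np

def hole_mesh(r_in=10.0, r_out=20.0, depth=15.0, segments=36):
    """Triangle mesh of the hole: four triangles (two on the top annulus, two
    on the inner wall) are defined for one angular segment, and the segment is
    rotated about the hole axis (taken here as the y-axis) to close the ring."""
    def pt(r, a, y):  # point at radius r, angle a, height y
        return np.array([r * np.cos(a), y, r * np.sin(a)])

    tris = []
    for i in range(segments):
        a0 = 2.0 * np.pi * i / segments
        a1 = 2.0 * np.pi * (i + 1) / segments
        # Two triangles on the top surface (annulus between r_in and r_out).
        tris.append((pt(r_in, a0, 0.0), pt(r_out, a0, 0.0), pt(r_out, a1, 0.0)))
        tris.append((pt(r_in, a0, 0.0), pt(r_out, a1, 0.0), pt(r_in, a1, 0.0)))
        # Two triangles on the inner wall, from the top down to -depth.
        tris.append((pt(r_in, a0, 0.0), pt(r_in, a1, 0.0), pt(r_in, a1, -depth)))
        tris.append((pt(r_in, a0, 0.0), pt(r_in, a1, -depth), pt(r_in, a0, -depth)))
    return tris

print(len(hole_mesh()))  # 36 segments x 4 triangles = 144 polygons
```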


Fig. 4. PointShell and singular points.

7. Haptic Rendering Model

The haptic rendering model of the peg-in-hole insertion, which generates the force data, is constructed using the PointShell method. The PointShell of an object is the collection of all the points forming the surface of the object. A surface normal vector pointing inwards is assigned to every point on the PointShell to provide the contact force direction.33 Figure 4(a) illustrates the normal vectors of a PointShell. In the PointShell developed for the peg-in-hole insertion, the directions of the vectors assigned to singular points are not pre-determined, as they depend on the normal of the contact surface [Fig. 4(b)]. The directions are assigned when the peg and hole are in contact. This is sufficient for this stage of the application, as it does not require a voxmap, which is created by discretizing the static environment space and is used for collision detection by probing it with the PointShell of a dynamic object.33,34

Two different approaches have been explored to build the haptic rendering model that generates the force data. In the first approach, the peg is defined as a dynamic object described by a collection of points, as shown in Fig. 5(a). The direction of the force generated at a dotted point is normal to the surface of the hole at that point. For a black point, on the other hand, the direction of the generated force is opposite to the surface normal of the peg, as shown in Fig. 5(b). The force signals are generated only for the points located within the hole object, as shown in Fig. 5(c).

In the second method, only the points at the two ends of the peg and the hole are considered in the modelling [Fig. 6(a)].

Fig. 5. The first approach used in physical modeling.


Fig. 6. The second method used in physical modeling.

It is possible to reduce the number of points further by considering only those on the sides where the hole and the peg make contact with each other, as illustrated in Fig. 6(b). The direction of the force vector at the dotted and black points is the same as in the first method. The second method is used at this stage, as it requires less computation time.

The PHANTOM provides only 3 d.o.f., along the X-, Y- and Z-axes. The rotation of the peg about the Z- and X-axes (Fig. 2) is controlled by activating certain keys on the keyboard. The magnitude of the force generated at each point is calculated by

f = k·d + c·a_d + b·v,   (1)

where:
• d is the depth of the point in the other (static) object;
• a_d is the accumulated depth during a continuous contact between the point and the static object;
• v is the velocity of the object, calculated as the current depth minus the last depth, divided by the sampling time;
• k is the stiffness coefficient;
• b is the damping coefficient;
• c is the coefficient for the accumulated depth.

Friction can be added to the contact force. The depth of a black point in the peg, and the force direction, can be found using the intersect function provided by the Ghost SDK for a cylinder. The intersect function finds the intersection between an object and a line defined by the current and previous locations of a point on the other object. This is illustrated for the intersection between the peg and the hole in Fig. 7(a). The depth is calculated by projecting the vector from the current point to the intersection point onto the normal of the object at the intersection point.

In the virtual environment, the black points are static since the hole is static, while the peg is dynamic. Hence, the position of a previous point is estimated based on the current position of the peg [Fig. 7(b)]. It is assumed that the coordinates of the estimated previous point in the coordinate frame of the current peg position are the same as the coordinates of the current point in the coordinate frame of the previous peg position. Thus the depth of the black point in the peg can be determined as shown in Fig. 7(a).
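A minimal sketch of the per-point force magnitude of Eq. (1) is given below, assuming the penetration depth has already been obtained from the intersect function. The coefficient values and sampling time are illustrative assumptions; the resulting magnitude would then be applied along the point's force direction (the hole surface normal for a dotted point, the reversed peg normal for a black point).

```python
# Sketch of the per-point contact force of Eq. (1). Gains and sampling time
# are assumed values; the paper does not give them.
K_STIFF = 0.8    # k, stiffness coefficient (assumed)
B_DAMP = 0.05    # b, damping coefficient (assumed)
C_ACCUM = 0.01   # c, coefficient for the accumulated depth (assumed)
DT = 0.001       # sampling time in seconds (assumed)

class ContactPoint:
    """Tracks one PointShell point during a continuous contact."""
    def __init__(self):
        self.prev_depth = 0.0
        self.accum_depth = 0.0

    def force_magnitude(self, depth):
        """Evaluate f = k*d + c*a_d + b*v for the current penetration depth."""
        if depth <= 0.0:                    # contact broken: reset the accumulator
            self.prev_depth = 0.0
            self.accum_depth = 0.0
            return 0.0
        velocity = (depth - self.prev_depth) / DT  # v = (current - last depth)/dt
        self.accum_depth += depth                  # a_d, accumulated while in contact
        self.prev_depth = depth
        return K_STIFF * depth + C_ACCUM * self.accum_depth + B_DAMP * velocity

p = ContactPoint()
for d in (0.0, 0.1, 0.25, 0.3):             # deepening penetration over four samples
    print(round(p.force_magnitude(d), 2))
```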


Fig. 7. Depth calculation. (Labels: current point, intersection point, previous point, depth; previous and current positions of the peg; assumed previous point.)
The data produced by the haptic rendering model is stored in a file at each sampling time. It consists of:

x(k), y(k) and z(k): coordinates of the point;
θx(k), θz(k): direction of the peg;
Fx(k), Fy(k) and Fz(k): reactive forces on the peg;
Tx(k) and Tz(k): reactive torques on the peg;
Δθx(k), Δθz(k) and Δy(k): angular control signals about the X- and Z-axes and the position control signal along the Y-axis.
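A minimal sketch of one such record and of writing a demonstration trace to a file is given below; the field names mirror the list above, while the CSV format and file name are assumptions (the paper does not specify the file format).

```python
import csv
from dataclasses import dataclass, fields

# One sample of the teaching data logged at sampling instant k.
@dataclass
class Sample:
    x: float; y: float; z: float          # coordinates of the point
    theta_x: float; theta_z: float        # direction of the peg
    fx: float; fy: float; fz: float       # reactive forces on the peg
    tx: float; tz: float                  # reactive torques on the peg
    d_theta_x: float; d_theta_z: float    # angular control signals about X and Z
    d_y: float                            # position control signal along Y

def log_trace(samples, path="trace.csv"):
    """Write one demonstration trace to a file, one row per sampling instant."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow([fld.name for fld in fields(Sample)])
        for s in samples:
            writer.writerow([getattr(s, fld.name) for fld in fields(Sample)])
```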

Fig. 8. An example of data extracted from the virtual environment: (a) position (mm); (b) forces (N); (c) torque (Nmm); (d) angles (rad).


The sampling time can be changed according to the application. An example of teaching data extracted from the virtual environment for the Perception Module is shown in Fig. 8. Friction is not considered in this example.

8. Perception Module

A manipulation task consists of a sequence of basic skills. Identification of these basic skills, and mapping them onto an equivalent series of robot manipulation primitives, form the core of an algorithm for skill acquisition and the transfer of those skills from a human to a robotic manipulator. Such skill-based manipulation is an effective way for a robotic manipulator to execute a complex task. This takes place in the Perception Module.

The basic skills are defined according to the contact state transitions of a task, independent of the configuration of a manipulator.35 In a virtual manipulation environment, the basic skills can also be identified by the contact states and state changes.36,37 Using this approach, the basic skills can be automatically extracted from the manipulation carried out in the virtual environment.

The contact states can be defined based on the available degrees of freedom (DOF). For example, it is possible to identify three types of contact states38 for rotation:

• Maintaining state: the contact relation can be maintained even if the object is rotated around the contact point [Fig. 9(a)].
• Detaching state: to maintain this contact state, the object cannot be rotated [Fig. 9(b)].
• Constraining state: in this contact state, the object cannot be rotated at all [Fig. 9(c)].

The change of the contact states is based on the change of DOF. For example, by rotating the object in Fig. 9(a), the contact state can change from maintaining to detaching. The contact states can also be defined based on the geometrical and spatial relationships between two contacting objects, as shown in Fig. 10.

Fig. 9. Maintaining, detaching and constraining states.


Fig. 10. Three contact states of the insertion phase.

The three contact states illustrated in Figs. 10(a)–10(c) are different configurations between the peg and the hole during the insertion process.

The Perception Module generates the robotics manipulation motions according to the environment and the task description. It acquires skills on line, adapts to states not previously encountered, and updates its skill library accordingly. Thus, it can cope with rapid changes of task and environment.

The manipulation skills acquired from the virtual environment are translated into machine manipulation skills. One option is to use heuristic methods such as adaptive fuzzy logic and neural networks. Such systems, though simple and automatic, have serious limitations when applied to uncertain systems. In this work, a generic and robust method proposed by Lee and Chen39 is employed, as described below.

The coordinates of the peg-in-hole insertion experimental rig are assumed to be the same as those of the virtual environment. It also provides the same DOF as the virtual model. It is assumed that the peg is located above the hole and that the focus is on the insertion phase only. It is possible to identify three contact states for the insertion phase, as shown in Fig. 10. This requires two basic skills to complete the task35:

(1) "Move-to-touch" skill: move the peg to touch the hole. If there is increased resistance in the direction of motion, the peg is jammed inside the hole, as shown in Fig. 11(a).
(2) "Rotate-to-insertion" skill: rotate the hole to realign it with the peg and complete the insertion process [Fig. 11(b)]. This is equivalent to rotating the peg to realign it with the hole.

The insertion is considered successful if the peg can be pressed down for a certain distance without jamming. The first skill is a simple skill that can be applied directly by pushing the peg until it is jammed (a sketch of such a guarded move is given below). The second skill can also be applied directly, by maintaining the contacts while changing the orientation of the manipulated object and then pushing the object.35 This may not be optimal in terms of the shortest manipulation time or the number of manipulation actions. The method employed in this work represents the second skill in terms of a feasible state transition region defined in the augmented state space, generating an optimal sequence of state transitions.39
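The paper does not give an implementation of the "move-to-touch" skill; the following is a minimal sketch of it as a guarded move along the Y-axis. The interface functions, force threshold and approach speed are hypothetical, illustrative assumptions.

```python
# Minimal sketch of the "move-to-touch" skill as a guarded move along the Y-axis.
# read_force_y() and command_velocity_y() are hypothetical interfaces to the rig
# (or to the virtual environment); the threshold and speed are assumed values.
FORCE_THRESHOLD = 2.0   # N, resistance indicating contact or jamming (assumed)
APPROACH_SPEED = -5.0   # mm/s, downward motion along Y (assumed)

def move_to_touch(read_force_y, command_velocity_y, timeout_steps=2000):
    """Push the peg down until the reaction force along Y exceeds the threshold."""
    for _ in range(timeout_steps):
        if abs(read_force_y()) > FORCE_THRESHOLD:
            command_velocity_y(0.0)          # stop: the peg has touched or jammed
            return True
        command_velocity_y(APPROACH_SPEED)   # keep moving towards the hole
    command_velocity_y(0.0)
    return False                             # no contact detected within the timeout
```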


Fig. 11. Insertion steps.

The augmented state space is formed by the current state and the next state. A point in the augmented state space represents a feasible state transition. At first, the data of many successful traces with different initial states are collected from the virtual environment. The feasible state transition region is then approximated by hyper-ellipsoids of multiple sizes using a Multi-resolution Radial basis Competitive and Cooperative Network (MRCCN). This is a bi-directional many-to-many mapping neural network capable of learning complex many-to-many relations between input and output, deriving more than one solution from a given value if necessary.39

The hidden units in the MRCCN are Radial Basis Function Units (RBFUs). An RBFU is denoted by three parameters that define a hyper-ellipsoid in the augmented state space, called an accommodation boundary. Figure 12 shows the accommodation and generation of RBFUs for a one-dimensional system. The initial sample data is used as a reference point and given a circular characteristic function (circle). When a subsequent teaching sample is within the circle, the reference point moves toward the sample and the shape also changes towards it [Fig. 12(a)]. When the sample is outside the circle, a new RBFU is created at the sample point with a circular boundary [Fig. 12(b)].40

Fig. 12. Accommodation and generation of RBFUs. (Labels: reference point, sample point, updated reference point.)


Fig. 13. An example of a one-dimensional dynamic system (axes: X(k) versus X(k+1)).

This process is repeated for the updated and newly generated boundaries.

In the second stage, an optimal path from any state to the goal state in the feasible state transition region is sought, based on the Bidirectional Dynamic Path Planning (BDPP) algorithm.39 Figure 13(a) provides an example of the feasible state transition region of a one-dimensional dynamic system. The result of multi-resolution clustering looks like Fig. 13(b). The optimal transition from 0 to 12.5 is labelled in Fig. 13(c) by three points, (0, 5), (5, 7.5) and (7.5, 12.5), which means the optimal path is 0 → 5 → 7.5 → 12.5.

If the peg is constrained to move along the Y-axis in both the virtual environment and the experimental rig, the current state is represented by

[y(k), θx(k), θz(k), Fx(k), Fy(k), Fz(k), Tx(k), Tz(k), Δθx(k), Δθz(k), Δy(k)]

and the next state by

[y(k + 1), θx(k + 1), θz(k + 1), Fx(k + 1), Fy(k + 1), Fz(k + 1), Tx(k + 1), Tz(k + 1)]

where Δθx(k), Δθz(k) and Δy(k) are control parameters. In order to reduce computing time, the search can aim at finding a number of optimal paths with different initial states. During physical manipulation, if the initial feedback (y(k), θx(k), θz(k), Fx(k), Fy(k), Fz(k), Tx(k), Tz(k)) from the experimental rig (almost) matches the first eight parameters of any state in the optimal paths, the remaining manipulation steps can follow the corresponding optimal path.

The overall process is described in Fig. 14. In the Perception Module, two high-level skills are initially identified and then the second skill is acquired. In robot trajectory planning, the equivalent skill is chosen from the skill library accordingly and applied based on the current state and the representation of this skill. The rest of the system has not been implemented fully yet.
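Lee and Chen's MRCCN is not reproduced here; the following is a simplified sketch of the accommodation/generation rule described above, using circular (scalar-radius) units only. The radius and learning rate are assumed values, and the real network uses hyper-ellipsoids at multiple resolutions with competitive/cooperative updates.

```python
import numpy as np

# Simplified sketch of RBFU accommodation and generation: a sample inside a
# unit's boundary pulls that unit's reference point towards it; a sample
# outside every boundary spawns a new unit at the sample point.
RADIUS = 2.0          # accommodation boundary (assumed, circular)
LEARNING_RATE = 0.3   # how far a reference point moves towards a sample (assumed)

def train_rbfus(samples):
    """Build a list of RBFU reference points in the augmented state space."""
    units = []                                   # each unit: its reference point
    for s in np.atleast_2d(samples).astype(float):
        dists = [np.linalg.norm(s - u) for u in units]
        if units and min(dists) < RADIUS:        # accommodation: move nearest unit
            nearest = int(np.argmin(dists))
            units[nearest] += LEARNING_RATE * (s - units[nearest])
        else:                                    # generation: new unit at the sample
            units.append(s.copy())
    return units

# Augmented-state samples (x(k), x(k+1)) from successful traces, as in Fig. 13.
traces = [(0, 5), (5, 7.5), (7.5, 12.5), (0.5, 5.2), (5.1, 7.4)]
print(train_rbfus(traces))
```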

9. Conclusion

Work studying the transfer of manipulation skills from a human to a machine through a virtual environment with haptic feedback has been reported.


Fig. 14. Overall system model. (Labels in the figure: human operator; haptics virtual environment; feedback; robot trajectory planning; skill learning module; skill library with Skill 1 "move the peg to touch the hole" and Skill 2 "turn the hole and insert the peg"; switching; acquired skills; controller; experiment rig; sensors; insertion task.)

Broadly, the project has a generic scope that is novel and innovative. It explores how human motor manipulation skills can be replicated by a machine. The model identified for human psychomotor learning is emulated in the machine to achieve the different stages of motor learning. The work also aims at providing new insight into the nature of the transfer of manipulation skills from human to machine. It explores new generic intelligent algorithms and methodologies to emulate different stages of human psychomotor learning in a machine, including perception, imitation, mechanism and complex overt response. This makes the project significantly different from other machine learning projects, which concentrate mainly on machine learning in the cognitive domain.

The overall concept has been presented. The work at this stage has its focus on the peg-in-hole insertion process. The concepts and methodologies developed for this application will be expanded in the next stage of the project towards more generic algorithms and models. The progress achieved so far in the development of the haptic-rendered virtual environment and the Perception Module has been presented. The geometrical and force modelling of the virtual environment has been described and the results illustrated. The work conducted on the development of the Perception Module to acquire basic manipulation skills has been presented.


Work is now in progress to develop the task-planning module towards full implementation of the paradigm illustrated in Fig. 1.

Acknowledgment

The support of the CRC (Cooperative Research Centre) for Intelligent Manufacturing and Systems Ltd. for this project is acknowledged.

References
1. F. A. Mussa-Ivaldi, N. Hogan and E. Bizzi, Neural, mechanical and geometric factors subserving arm posture in humans, J. Neurosci. 5 (1985) 2732–2743.
2. T. L. Perez and P. H. Winston, LAMA: A language for automatic mechanical assembly, Proc. Int. Joint Conf. Artif. Intell. (1977) 321–333.
3. H. Matsubara, A. Okano and H. Inoue, Design and implementation of a task level robot language, J. Robotics Soc. Japan 3, 3 (1985).
4. S. J. Derby, General robot arm simulation program (GRASP): Parts 1 and 2, ASME Comput. Eng. Conf., San Diego (1982) 139–154.
5. A. Naylor, I. Shao, R. Volz, R. Jungelas, P. Bixel and K. Lloyd, PROGRESS — A graphical robot programming system, Proc. IEEE Int. Conf. Robotics Automat. (1987) 1282–1291.
6. B. Dufay and J. C. Latombe, An approach to automatic robot programming based on inductive learning, Int. J. Robotics Res. 3, 4 (1984).
7. P. D. Summers and D. D. Grossman, SPROBE: An experimental system for programming robots by example, Int. J. Robotics Res. 3, 1 (1984).
8. H. Asada and Y. Assari, The direct teaching of tool manipulation skills via the impedance identification of human motions, Proc. IEEE Int. Conf. Robotics Automat. (1988) 1269–1274.
9. T. Sato and S. Hirai, Language-aided robotic teleoperation system (LARTS) for advanced tele-operation, IEEE Trans. Robotics Res. 3, 5 (1987) 476–481.
10. S. Suji, A. Morizono and S. Kuroda, Understanding a simple cartoon film by a computer vision system, Proc. Int. Joint Conf. Artif. Intell. (1977) 609–610.
11. R. Thibadeau, Artificial perception of actions, Cognitive Sci. 10, 2, 117–149.
12. Y. Kuniyoshi, M. Inaba and H. Inoue, An approach to real time action understanding using robotic vision — Step 1: Generating state description sequences of an attention-getting object, Proc. Ann. Conf. Robotics Soc. Japan (1987) 435–436.
13. K. Ikeuchi and T. Suehiro, Towards an assembly plan from observation, Technical Report CMU-CS-91-167, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, 1991.
14. N. Haas, Learning by ostentation for robotics assembly, Proc. SPIE, 1991.
15. D. Sato, S. Yamada and M. Uchiyama, Human skill analysis based on multi-sensory data, Proc. IEEE Int. Workshop Robot Human Comm. (1987) 278–283.
16. Y. Kuniyoshi and M. Inaba, Learning by watching: Extracting reusable task knowledge from visual observation of human performance, IEEE Trans. Robotics Automat. 10, 6 (1994).
17. D. A. Pomerleau, Neural Network Perception for Mobile Robot Guidance (Kluwer Academic Publishers, 1993).
18. G. Z. Grudic and P. D. Lawrence, Human-to-robot skill transfer using the SPORE approximation, Proc. IEEE Int. Conf. Robotics Automat. (1996) 2962–2967.


19. M. Kaiser and R. Dillmann, Building elementary robot skills from human demonstration, Proc. IEEE Int. Conf. Robotics Automat. (1996) 2700–2705.
20. D. A. Handleman and S. H. Lane, Human-to-machine skill transfer through cooperative learning, Intell. Cont. Syst. Theory Appl., eds. M. M. Gupta and N. K. Sinha (1996) 187–205.
21. J. Adams, A closed-loop theory of motor learning, J. Motor Behaviour 3, 2 (1971) 111–149.
22. A. J. Harrow, A Taxonomy of the Psychomotor Domain (David McKay Company Inc., New York, 1972) 16.
23. K. U. Smith and W. H. Smith, Perception and Motor (W. B. Saunders, 1962).
24. E. J. Simpson, The classification of educational objectives: Psychomotor domain, University of Illinois Research Project No. OE 5 (1966) 85–104.
25. M. Jeannerod, The Neural and Behavioural Organisation of Goal-Directed Movements (New York, 1988).
26. D. A. Rosenbaum, Human Motor Control, 1991.
27. D. H. Holding, Human Skills, 1989.
28. R. C. Mathews, R. R. Buss, W. B. Stanely, F. Blanchard-Fields, J. R. Cho and B. Druhan, Role of implicit and explicit processes in learning from examples: A synergistic effect, J. Exp. Psychol.: Learning Memory Cognition 15, 6 (1989) 1083–1100.
29. M. Schoppers, Universal plans for reactive robots in unpredictable environments, Proc. 10th Int. Joint Conf. Artif. Intell. II (1987) 1039–1046.
30. J. R. Anderson, Acquisition of cognitive skill, Psychol. Rev. 89 (1982) 369–406.
31. I. Kupfermann, Learning, Principles of Neural Science, eds. E. Kandel and J. Schwartz (1985) 810.
32. SensAble Technologies — Haptics Research [online], available: http://www.sensable.com/haptics/
33. M. Renz, C. Preusche, M. Pötke, H.-P. Kriegel and G. Hirzinger, Stable haptic interaction with virtual environments using an adapted voxmap-pointshell algorithm, Proc. Eurohaptics Conf., Birmingham, UK, 2001.
34. W. A. McNeely, K. D. Puterbaugh and J. J. Troy, Six degree-of-freedom haptic rendering using voxel sampling, Proc. ACM SIGGRAPH (1999) 401–408.
35. A. Nakamura, T. Ogasawara, T. Suehiro and H. Tsukune, Skill-based back-projection for fine motion planning, Proc. 1996 IEEE/RSJ Int. Conf. Intell. Robots Syst., IROS'96 2 (1996) 526–533.
36. J. Takamatsu, H. Kimura and K. Ikeuchi, Classifying contact states for recognizing human assembly task, Proc. 1999 IEEE/SICE/RSJ Int. Conf. Multisensor Fusion Integration Intell. Syst. (1999) 177–182.
37. H. Onda, H. Hirukawa and K. Takase, Assembly motion teaching system using position/force simulator — Extracting a sequence of contact state transitions, Proc. 1995 IEEE/RSJ Int. Conf. Intell. Robots Syst. — Human Robot Interaction Coop. Robots 1 (1995) 9–16.
38. J. Takamatsu, H. Tominaga, K. Ogawara, H. Kimura and K. Ikeuchi, Extracting manipulation skills from observation, Proc. 2000 IEEE/RSJ Int. Conf. Intell. Robots Syst. 1 (2000) 584–589.
39. S. Lee and J. Chen, Skill representation and acquisition, Intell. Syst. 21st Century, IEEE Int. Conf. Syst., Man Cybernet. 5 (1995) 4325–4330.
40. S. Lee and S. Shimoji, RCCN: Radial basis competitive and cooperative network, Proc. Fourth Int. Conf. Tools Artif. Intell., TAI'92 (1992) 78–84.

