Research Article Open Access
Three Dimensional Computer Vision-Based Alternative Control Method For Assistive Robotic Manipulator
Hyun W. Ka1,2*, Dan Ding1,2, Rory Cooper1,2
1Human Engineering Research Laboratories, Department of Veterans Affairs, Pittsburgh, PA
2Department of Rehabilitation Science and Technology, University of Pittsburgh, Pittsburgh, PA
*Corresponding author: Hyun W. Ka, University of Pittsburgh, 6425 Penn Ave., Suite 400, Pittsburgh, PA 15206, E-mail: @
Received: July 16, 2015 ; Accepted: February 11, 2016; Published: February 23, 2016
Citation: Ka HW, Ding D, Cooper RA (2016) Three Dimensional Computer Vision-Based Alternative Control Method For Assistive Robotic Manipulator. Int J Adv Robot Automn 1(1): 1-6.
Abstract
JACO (Kinova Technology, Montreal, QC, Canada) is an assistive robotic manipulator that is gaining popularity for its ability to assist individuals with physical impairments in activities of daily living. To accommodate a wider range of users, especially those with severe physical limitations, alternative control methods need to be developed. In this paper, we present a vision-based assistive robotic manipulation assistance algorithm (AROMA) for JACO, which uses a low-cost 3D depth-sensing camera and an improved inverse kinematics algorithm to enable semi-autonomous or autonomous operation of the JACO. Benchtop tests on a series of grasping tasks showed that AROMA was able to reliably determine target gripper poses. The success rates for the grasping tasks ranged from 85% to 100% across different objects.

Keywords: Rehabilitation robotics, Vision-based robot control, Alternative robotic manipulation, Human-robot interaction, Assistive technology
Introduction
Assistive robotic manipulators have long been recognized as having great potential to assist individuals with physical disabilities in a range of Activities of Daily Living (ADLs) [1-6]. In 2013, the US Department of Veterans Affairs prescribed about 170 assistive robotic manipulators to veterans with disabilities to support their independent living. One of the most popular assistive robotic manipulators is JACO from Kinova Technology (Montreal, QC, Canada). JACO is composed of six inter-linked segments with a three-fingered hand. The default control method for JACO is a 3D joystick with 7 buttons and a knob. However, people who have severely impaired motor function or a combination of multiple disabilities have found it difficult or impossible to operate it independently [3].

Vision-based autonomous control has been investigated as one solution for people who cannot effectively use manual control methods [7-15]. Vision-based control can transfer the load of positioning and fine manipulation to an autonomous algorithm, reducing the complexity exposed to the user. To implement vision-based autonomous control, many researchers adopted an eye-in-hand camera [7, 8, 13, 15], mounted on the robot gripper or wrist, to guide the robot towards an object of interest. This approach must update the object location continuously until the end-effector acquires the target object, and is thus computationally expensive. Other researchers mounted a camera at a fixed position on the robot base or shoulder [12]. While this approach has the advantage of being able to find a path and grasping plan even when the object is occluded from the starting or folding position [14], it requires advance knowledge of the target object and its surroundings in order to localize the object and plan a trajectory [16]. Still other researchers combined the two approaches to provide more reliable and robust control [10, 14]; however, the combined approach can significantly increase implementation cost and system overhead. More recently, the use of 3D cameras such as the Microsoft Kinect has been investigated in assistive robot applications [11].

We have developed and evaluated a vision-based Assistive Robotic Manipulation Assistance Algorithm (AROMA) for JACO, which uses a different type of low-cost 3D depth-sensing camera and an improved Inverse Kinematics (IK) algorithm over the one provided by the JACO Application Program Interface (API). In addition, AROMA was developed on a Windows operating system instead of the Robot Operating System (ROS), which makes it easier for the algorithm to be adopted by nontechnical users and clinical professionals.
Design Approach
AROMA consists of two inter-related modules: a 3D vision module and a custom IK module. Figure 1 shows how AROMA works.
Figure 1: AROMA Activity Diagram.
3D Vision Module
We adopted a low-cost, short-range 3D depth-sensing camera (Senz3D, Creative Labs, Inc., Milpitas, CA) in this study and mounted it on the robot base (Figure 2). The Senz3D uses the time-of-flight technique to obtain depth information within its field of view (70 degrees diagonal) and working range (20-90cm) at a maximum resolution of 320x240. The time-of-flight technique in general outperforms the structured-light technique used in the Microsoft Kinect [17]. The Senz3D generates a 3D point cloud in which each point represents the distance (15-90cm) to objects within its field of view. From the 3D point cloud, the shape and dimensions (width and height) of the target object can be estimated. To stabilize the depth data and increase the accuracy of the estimation, we used depth processing techniques, including moving-average filtering and segmentation, provided by the rgbd module (by Vincent Rabaud) integrated into Intel's open-source computer vision library (OpenCV 3.0 alpha). Based on the estimated position and dimensions of the target object, the end-effector pose (position and orientation) was calculated and fed to the custom IK module. The 3D point-cloud-based approach is less dependent on lighting conditions and is rotation invariant, as opposed to conventional approaches that usually require images of an object in various poses.
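To illustrate the processing steps described above, the following NumPy sketch mimics moving-average filtering of depth frames and depth-based segmentation to estimate an object's metric width. It is not the authors' implementation (which used OpenCV's rgbd module); the focal length `f_px` and the 3cm segmentation band are assumed parameters for illustration only.

```python
import numpy as np

def smooth_depth(frames):
    """Temporal moving-average filter over a stack of depth frames
    (each H x W, meters) to suppress per-pixel sensor noise."""
    return np.mean(np.stack(frames), axis=0)

def estimate_object_width(depth, f_px, near=0.15, far=0.90):
    """Segment the closest object by depth thresholding within the
    camera's working range, then estimate its metric width from the
    segmented pixel span using the pinhole model (X = x * Z / f)."""
    valid = (depth >= near) & (depth <= far)
    if not valid.any():
        return None  # nothing in the working range
    d_min = depth[valid].min()
    # keep points within 3 cm of the closest surface -> one object
    mask = valid & (depth <= d_min + 0.03)
    cols = np.where(mask.any(axis=0))[0]
    width_px = cols[-1] - cols[0] + 1
    return width_px * d_min / f_px
```

In the same way, the object's height and centroid can be read from the row span and mask center, which is what feeds the end-effector pose computation.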
Custom Inverse Kinematics Module
Before developing the custom IK module, we investigated two inherent problems with JACO. One problem is that the JACO API provides an IK function whose input is the center of the wrist-to-hand link, instead of the target end-effector pose. Because the default IK function does not consider the virtual link between the end-effector and the target object, collisions are inevitable during object manipulation when the default IK is used.

Another problem is the JACO workspace and positioning accuracy. According to the JACO technical documents, JACO can reach approximately 90cm in all directions under joystick control. However, when using the default IK function, we noticed that JACO has a reduced workspace due to the embedded singularity avoidance algorithms. To map the actual workspace in which there is no limitation on performing translational, rotational, and grasping motions, we programmed JACO to automatically reach and perform all three basic motions at 1cm resolution within the theoretical workspace of 90cm radius and 110 degrees of phi (φ) (Figure 3). The JACO arm was found to be fully functional within Area 3 (a quarter-ellipsoid with about 62cm radius and 110 degrees of phi, excluding the dead zone). In Area 2 (a quarter-ellipsoid with about 73cm radius and 110 degrees of phi), one of the three basic motions fails and JACO gets stuck by the singularity avoidance algorithms until manual control with the physical joystick overrides the current command. In Area 1, in addition to the same problem as in Area 2, the positioning accuracy of the JACO arm is severely compromised, and the end-effector has difficulty holding its position.

Figure 2: JACO with Senz3D camera.

Figure 3: JACO working spaces.
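The empirically mapped areas amount to a radial lookup around the robot base. The sketch below is illustrative only: it uses the radii reported above but, for simplicity, ignores the 110-degree phi limit and the dead zone.

```python
import math

def classify_workspace(x, y, z):
    """Classify a target point (cm, origin at the robot base) into the
    empirically measured JACO workspace areas:
      3 -> fully functional (all three basic motions work)
      2 -> one basic motion may fail (singularity avoidance stalls)
      1 -> positioning accuracy severely compromised
      0 -> outside the theoretical 90 cm reach
    Note: the 110-degree phi limit and dead zone are omitted here."""
    r = math.sqrt(x * x + y * y + z * z)
    if r <= 62.0:
        return 3
    if r <= 73.0:
        return 2
    if r <= 90.0:
        return 1
    return 0
```

A planner can use such a check to reject or re-home goal poses that fall outside Area 3 before invoking the IK solver.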

To address these issues, we developed a custom IK module that accounts for the missing tip-target link kinematics, based on the target object pose obtained through the 3D vision module and the robot parameters of the JACO arm shown in Tables 1-3. To compute a minimum-effort IK solution and plan a trajectory to a desired goal position, we adopted OpenRAVE's IKFast robot kinematics compiler, which analytically solves and generates optimized IK functions. We also compensated for dynamic effects caused by common factors such as gravity and positioning tolerances by refining the IK solution with the Levenberg-Marquardt algorithm, also known as the damped least-squares (DLS) method [18, 19]. The refined IK solution was sent to the JACO controller, where virtual joystick signals emulating physical joystick commands were used to control JACO.

The arm sagging issue, in which the hand position of the JACO arm drops 1-2cm whenever grasping commands are sent, was solved by using the commanded Cartesian information (API function: GetCommandCartesianInfo()) instead of relying on the reported current arm position (API function: GetHandPosition()).
Table 1: D-H Parameters of JACO.

i | alpha(i-1) | a(i-1) | d(i) | theta(i)
--|------------|--------|------|---------
1 | 0          | 0      | D1   | q1
2 | -pi/2      | 0      | 0    | q2
3 | 0          | D2     | 0    | q3
4 | -pi/2      | 0      | d4b  | q4
5 | 2*aa       | 0      | d5b  | q5
6 | 2*aa       | 0      | d6b  | q6

Table 2: Link length values.

Link | Length (meters) | Explanation
-----|-----------------|------------------
D1   | 0.2102          | Base to elbow
D2   | 0.4100          | Arm
D3   | 0.2070          | Front arm
D4   | 0.0750          | First wrist
D5   | 0.0750          | Second wrist
D6   | 0.1850          | Wrist to the hand

Table 3: Alternate parameters.

Parameter | Definition
----------|---------------------------------
aa        | (11.0*PI)/72.0
ca        | cos(aa)
sa        | sin(aa)
c2a       | cos(2*aa)
s2a       | sin(2*aa)
d4b       | D3 + (ca - c2a/s2a*sa)*D4
d5b       | sa/s2a*D4 + (ca - c2a/s2a*sa)*D5
d6b       | sa/s2a*D5 + D6
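Assuming the tabulated expressions are read with standard operator precedence, the derived wrist offsets can be evaluated directly from the Table 2 link lengths, e.g. in Python:

```python
import math

# Link lengths from Table 2 (meters)
D3, D4, D5, D6 = 0.2070, 0.0750, 0.0750, 0.1850

# Alternate parameters from Table 3
aa = 11.0 * math.pi / 72.0
sa, ca = math.sin(aa), math.cos(aa)
s2a, c2a = math.sin(2 * aa), math.cos(2 * aa)

# Derived D-H offsets used in rows 4-6 of Table 1
d4b = D3 + (ca - c2a / s2a * sa) * D4
d5b = sa / s2a * D4 + (ca - c2a / s2a * sa) * D5
d6b = sa / s2a * D5 + D6
```

Note that by the angle-difference identity, ca - c2a/s2a*sa reduces to sa/s2a (= 1/(2*ca)), so the two coefficients in Table 3 are in fact equal.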

Methods
Instruments
The instruments used for testing AROMA included a JACO robotic arm, a Senz3D camera, and a laptop computer running the custom software under the Windows 7 operating system. The JACO robotic arm was a research edition assembled in 2012. All configuration parameters were kept at their default values throughout the experiment. The firmware version was 5.0.5.0033 and the API version was R5.0.2. The robot arm was fixed to a table with clamps, and the Senz3D camera was attached 2cm below the center of the robot base. The custom software, written in C#/C++, ran the 3D vision and custom IK algorithms while monitoring the robot's behavior.
Data Collection
In the experiment, picking-up/grasping tests were conducted with two kinds of objects: balls of three different diameters (4.5cm, 6.5cm, and 8.5cm) and a bottle of water (Figure 4). The ball experiment started by commanding the robot to pick up a ball placed at a random position on the table within JACO's theoretical workspace. Upon successfully grasping the ball, the robot lifted it 50cm and then dropped it at a random position. The robot then returned to the default home position and automatically repeated the same procedure 100 times. If the robot failed to grasp the ball twice in a row, the investigator placed it at a random position manually. For the bottle experiment, the investigator manually placed the bottle at a random location (calculated by a random number generator) each time, and the experiment was repeated 20 times. Throughout the tests, the estimated object locations and dimensions were recorded, and performance measures, including the time to grasp the target object (from the start of each trial to when the target object was lifted 50cm) and the success rate, were calculated.

Figure 4: Test Objects.
Results
The results from the experiments are presented in Table 4. As shown in Table 4, it took longer to grasp the bottle than the balls, possibly due to the different grasping strategies (Figure 5). Based on the estimated dimensions of the target object, either grasping from the side or from the top was automatically selected.

As for the success rate, grasping the small ball was the least reliable among the three balls, with a 93% success rate. In the failed trials, the JACO hand was able to pick up the ball but then dropped it before reaching the target height. We speculated that the glossy surface of the small ball might compromise the object pose estimation and thus lead to unreliable grasping points. Table 5 shows the deviations between the actual and estimated object widths for each ball. The small ball not only had the largest variance among the three balls, but its size also tended to be underestimated. The success rate of the bottle grasping test was 85%. The failed trials were mostly due to collisions between the JACO hand and the bottle. For both the small ball and bottle experiments, the object locations were well distributed, and the locations where unsuccessful trials occurred (marked in red) were highly scattered, with no systematic pattern found (Figures 6 and 7). In addition to the object pose estimation error, JACO's positioning tolerance of ±8mm might also have affected the performance.

Table 4: Test Results.

Ball Experiment:
Ball Size | Average Grasping Time (sec) | Success Rate
----------|-----------------------------|-------------
S         | 5.51 (±1.38)                | 93/100
M         | 4.17 (±0.97)                | 100/100
L         | 4.46 (±1.48)                | 100/100

Bottle Experiment:
Average Grasping Time (sec) | Success Rate
----------------------------|-------------
5.96 (±1.95)                | 17/20

Figure 5: Different grasping strategies.

Table 5: Object Width Estimation.

Width (mm) | Small        | Medium       | Large
-----------|--------------|--------------|--------------
Actual     | 45           | 65           | 85
Estimate   | 39.1 (±4.57) | 64.3 (±2.57) | 84.86 (±2.28)

Figure 6: Small Ball Placements (cm).

Figure 7: Bottle Placements (cm).
Discussion
Our results indicated that AROMA has the potential to enable users who are currently unable to operate an assistive robotic manipulator to use one, by providing autonomous or semi-autonomous manipulation assistance. AROMA has some advantages over conventional vision-based approaches. First, unlike other research studies [7-9, 12, 13], it relies on point clouds generated from a low-cost 3D depth-sensing camera, so its computational cost is lower than that of conventional 3D object pose estimation algorithms, which require images of various poses such as the front, the back, and all possible 3D rotations of the object. Chung and colleagues evaluated a vision-based autonomous function of an assistive robotic manipulator, mounting a high-resolution webcam on the robot shoulder [12]. In that study, they measured the task completion time and the success rate for a drinking task consisting of various subtasks, including picking up the drink from a start location and conveying it to the proximity of the user's mouth. The average task completion time for picking up a soda can on the table was 12.55 (±2.72) seconds, including an average object detection time of 0.45 (±0.12) seconds. The success rate of the pick-up task was 70.1% (44/62).

Second, AROMA uses infrared images and is thus less dependent on ambient lighting conditions than conventional image processing, which requires images of an object under different lighting conditions or sources to make the algorithm invariant to diverse lighting. Tsui, et al. developed a vision-based autonomous system for a wheelchair-mounted robotic manipulator using two stereo cameras, one mounted over the shoulder on a fixed post and one mounted on the gripper [10]. The user only needed to indicate the object of interest by pointing to it on a touch screen; the autonomous control then took over the rest of the task by reaching towards the object, grasping it, and bringing it back to the user. They evaluated this system with 12 individuals with various physical and cognitive disabilities, where participants were asked to retrieve an object from a bookshelf. The success rate of the autonomous function was 65% (129/198). Of the 69 unsuccessful trials, 56 (81%) were due to algorithm failures. Jiang, et al. also developed a vision-based autonomous robot control system combining a JACO robot arm with two Microsoft Kinect sensors: one for recognizing the user's voice, gestures, and body parts; the other for object recognition [11]. The user's voice and hand gestures served as the robot control commands. The object recognition algorithm relied on a two-step process: it extracted a feature vector for each object using the Histogram of Oriented Gradients algorithm, then trained a model and classified the objects with a nonlinear support vector machine. The system was evaluated by one participant on four different manipulation tasks (5 trials each), including drinking, phone calling, taking a self-portrait, and taking photos of the surroundings. The performance time ranged from 14 to 130 seconds, and accuracy ranged from 52% to 98%.

Third, AROMA addressed the inherent limitations of the JACO onboard IK algorithm, including the missing tip-target link, reduced working space, and arm sagging issues. In addition, the AROMA was developed under a Windows operating system, making it not only easier to integrate new and existing alternative input devices without developing additional driver software, but also increasing the likelihood of adoption by users and clinical professionals.

However, AROMA also has several limitations. First, when dealing with the missing tip-target link, we aimed to find a goal configuration of the end-effector that matches the target object pose under the assumption that there is no obstacle between the manipulator and the target object. This may compromise manipulation performance and safety in challenging environments such as cluttered spaces. To address this issue, additional sensors such as an eye-in-hand camera or force/tactile sensors could be adopted. Second, the experiments were conducted with simply shaped objects with smooth surfaces. To accommodate a variety of everyday objects with different characteristics, the damping factor of the DLS method may need to be adjusted to balance performance stability against speed. Lastly, it is also important to apply AROMA to real-world manipulation tasks and test it with individuals with upper extremity impairments.

In addition to supporting autonomous operation of JACO, a practical application of AROMA is to support semi-autonomous control, in which direct user control is combined with robot autonomy, strategically reducing the complexity exposed to the user while keeping the user in the control loop [20]. Users usually find fine manipulation of a robot manipulator more challenging and spend more time adjusting the end-effector position and orientation before grasping. AROMA could potentially address this issue by allowing users to use conventional input methods (e.g., a joystick) to move the arm close to the target object, and then voice control to command the robot to perform fine manipulation (e.g., grasping or pushing). Kim, et al. found that while the user effort required to operate the robot with autonomous control was significantly less than with manual control, user satisfaction with the autonomous control was lower than with manual control [13]. With semi-autonomous control, users only need to control the gross motion and can leave the fine manipulation to AROMA, which could potentially lead to improved performance and satisfaction.

We are planning two follow-up studies to apply AROMA. One study is to apply the semi-autonomous approach to an overhead track-mounted assistive robotic system called KitchenBot [21], which operates along an overhead track built into the kitchen to assist individuals with physical disabilities with tasks in a typical kitchen environment. The other study is to combine AROMA with automatic speech recognition to provide completely hands-free semi-autonomous operation.
Acknowledgement
This work is supported by Craig H. Neilsen Foundation and with resources and use of facilities at the Human Engineering Research Laboratories (HERL), VA Pittsburgh Healthcare System. This material does not represent the views of the Department of Veterans Affairs or the United States Government.
References
  1. Allin S, Eckel E, Markham H, Brewer BR. Recent trends in the development and evaluation of assistive robotic manipulation devices. Phys Med Rehabil Clin N Am. 2010;21(1):59-77.
  2. Romer G, Stuyt HJ, Peters A. Cost-savings and economic benefits due to the assistive robotic manipulator (ARM). Rehabilitation Robotics. 2005;201-204.
  3. Maheu V, Frappier J, Archambault PS, Routhier F. Evaluation of the JACO robotic arm: Clinico-economic study for powered wheelchair users with upper-extremity disabilities. Rehabilitation Robotics. 2011;1-5.
  4. Römer GW, Stuyt H, Peters G, Woerden KV. Processes for Obtaining a Manus (ARM) Robot within the Netherlands. Advances in Rehabilitation Robotics. 2004;221-230.
  5. Romer G, Stuyt H, Peters A. Cost-savings and economic benefits due to the assistive robotic manipulator (ARM). Proceedings of the 9th IEEE International Conference on Rehabilitation Robotics. 2005;201-204.
  6. King CH, Chen TL, Fan Z, Glass JD, Kemp CC. Dusty: an assistive mobile manipulator that retrieves dropped objects for people with motor impairments. Disabil Rehabil Assist Technol. 2012;7(2):168-179.
  7. Driessen B, Kate TT, Liefhebber F, Versluis AH, Woerden J. Collaborative control of the manus manipulator. Universal Access in the Information Society. 2005;4(2):165-173.
  8. Tijsma H, Liefhebber F, Herder J. Evaluation of new user interface features for the manus robot arm. Proceedings of the 9th IEEE International Conference on Rehabilitation Robotics (ICORR 2005). 2005;258-263.
  9. Laffont I, Biard N, Chalubert G, Delahoche L, Marhic B, Boyer FC, et al. Evaluation of a graphic interface to control a robotic grasping arm: a multicenter study. Arch Phys Med Rehabil. 2009;90(10):1740-1748.
  10. Tsui KM, Kim DJ, Behal A, Kontak D, Yanco HA. "I want that": Human-in-the-loop control of a wheelchair-mounted robotic arm. Applied Bionics and Biomechanics. 2011;8(1):127-147.
  11. Jiang H, Zhang T, Wachs JP. Autonomous Performance of Multistep Activities with a Wheelchair Mounted Robotic Manipulator Using Body Dependent Positioning. 2014.
  12. Chung CS, Wang H, Cooper RA. Autonomous function of wheelchair-mounted robotic manipulators to perform daily activities. IEEE Int Conf Rehabil Robot. 2013;1-6.
  13. Kim DJ, Lovelett R, Behal A. Eye-in-hand stereo visual servoing of an assistive robot arm in unstructured environments. IEEE International Conference on Robotics and Automation (ICRA '09). 2009;2326-2331.
  14. Srinivasa SS, Ferguson D, Helfrich CJ, Berenson D, Collet A, Diankov R, et al. HERB: a home exploring robotic butler. Auton Robot. 2010;28:5-20.
  15. Tanaka H, Sumi Y, Matsumoto Y. Assistive robotic arm autonomously bringing a cup to the mouth by face recognition. IEEE Workshop on Advanced Robotics and its Social Impacts (ARSO). 2010;34-39.
  16. Corke PI. Visual Control of Robots: High-Performance Visual Servoing. Research Studies Press, Taunton. 1996.
  17. Modarress D, Svitek P, Modarress K, Wilson D. Micro-Optical Sensors for Boundary Layer Flow Studies. ASME 2006 2nd Joint US-European Fluids Engineering Summer Meeting. 2006;2:1037-1044.
  18. Wampler CW. Manipulator inverse kinematic solutions based on vector formulations and damped least-squares methods. IEEE Transactions on Systems, Man and Cybernetics. 1986;16(1):93-101.
  19. Nakamura Y, Hanafusa H. Inverse kinematic solutions with singularity robustness for robot manipulator control. Journal of Dynamic Systems, Measurement, and Control. 1986;108(3):163-171.
  20. Kemp CC, Edsinger A, Torres J. Challenges for robot manipulation in human environments. IEEE Robotics and Automation Magazine. 2007;14:20.
  21. Ding D, Ka HW, Cooper R, Telson J, Kavita K. Focus Group Evaluation of an Overhead Kitchen Robot Appliance. RESNA, Indianapolis. 2014.

Open Access by Symbiosis is licensed under a Creative Commons Attribution 3.0 Unported License.