Design optimization of spatial six degree-of-freedom parallel manipulator based on artificial intelligence approaches

Abstract

Optimizing the system stiffness and dexterity of parallel manipulators by adjusting the geometrical parameters can be a difficult and time-consuming exercise, especially when the variables are multifarious and the objective functions are too complex. Optimization techniques based on artificial intelligence approaches are investigated as effective means of addressing this issue. In this paper, genetic algorithms and artificial neural networks are implemented together as an intelligent optimization tool for the dimensional synthesis problem of a spatial six degree-of-freedom (DOF) parallel manipulator. The objective functions for system stiffness and dexterity are derived from the kinematic analysis of the parallel mechanism. Neural networks trained with the standard backpropagation learning algorithm and with the Levenberg-Marquardt algorithm are utilized to approximate the analytic solutions of system stiffness and dexterity, respectively. Genetic algorithms are then executed with the objective functions described by the trained neural networks, which model the different performance solutions. Multi-objective optimization (MOO) of the performance indices is carried out by searching the Pareto-optimal frontier sets in the solution space. The effectiveness of this method is validated by simulation.

Keywords:

Optimization design; system stiffness; dexterity; genetic algorithms; artificial neural networks

1. Introduction

Compared with conventional serial manipulators, parallel manipulators have some significant advantages, such as higher stiffness and payload capacity and high force/torque capability, and they also have simpler inverse kinematics, which is an advantage in real-time control. Recently, parallel manipulators have been developed for applications in aircraft simulators [1,2], telescopes [3], positioning trackers [4], micro-motion devices [5-7], and machine tools [8-14]. However, because the theories and technologies for parallel manipulators are still immature, most parallel manipulators existing today are high-cost machines that provide less accuracy than conventional machines. Therefore, further investigation is needed to make parallel manipulators more attractive to industry [15].

Since the use of parallel manipulators is currently limited in capability for more extensive applications such as biotechnology and automotive manufacturing, performance improvement has been one of the most important issues to be adequately considered. The purpose of optimization design is to enhance the performance indices by adjusting the structural parameters, such as the link lengths, the radii of the fixed and moving platforms, and the distance between the center points of the two platforms. This approach is called dimensional-synthesis-based performance optimization of parallel manipulators. In the optimum design process, several performance indices may be involved for a given design purpose, such as stiffness, dexterity, accuracy, and workspace.

Many scholars have studied the optimum design of robot manipulators [16-19]. Zhao et al. [20] exploited the least number of variables to optimize the leg lengths of a spatial parallel manipulator in order to obtain a desired dexterous workspace. Stock and Miller [21] presented a method for multidimensional kinematic optimization of the linear delta robot architecture's geometry, formulating a utility objective function that incorporates two performance indices, manipulability and space utilization. Kucuk and Bingul [22] optimized the workspace of two spherical three-link robot manipulators using local and global performance indices. Ceccarelli and Lanni [23] investigated the MOO problem of a general 3R manipulator for prescribed workspace limits, solved numerically using an algebraic formulation. Artificial intelligence technologies provide effective approaches for investigating this topic.

As primary components of artificial intelligence, genetic algorithms and artificial neural networks play important roles in various fields of science and technology. In this research, the two methods are executed together as the optimization criteria of the dimensional synthesis for a symmetrical 6-DOF parallel manipulator. System stiffness and dexterity, the two main performance indices, are optimized both as single-objective optimization (SOO) and as MOO problems to demonstrate the validity of the proposed integrated artificial intelligence approaches and to improve the manipulation capabilities of the parallel manipulator after optimization.

In the present work, there are many parameters and complex matrix computations to handle. Hence, it is hard to search for the objective values (the optimal configuration) and the corresponding structural variables directly from the analytical expressions of system stiffness and dexterity. Moreover, with traditional optimization methods, only a few geometric variables can be handled because of the lack of convergence when confronted with more complex problems. Genetic algorithms, as powerful and broadly applicable stochastic search methods that follow the Darwinian evolutionary principle of "survival of the fittest", in which good traits are retained in the population and bad traits are eliminated, can escape from local optima [24]. Therefore, genetic algorithms are competent for addressing the convergence problem in this scenario. Neural networks possess the capability of complex function approximation and generalization by simulating the basic functionality of the human nervous system in an attempt to partially capture some of its computational strengths. Since the objective function must be available before genetic algorithms can be applied, neural networks are used to represent the expressions of the solutions of the two performance indices for the 6-DOF parallel manipulator. For the MOO problem in the design process, since it is impossible to maximize or minimize all of the objective function values if they conflict with each other, a trade-off algorithm must be executed. This methodology paves the way for providing not only effective guidance but also a new approach to dimensional synthesis for the optimal design of general parallel mechanisms.

In what follows, the geometric modeling and kinematic analysis are developed in Section 2, where the solutions of the inverse kinematics model and the Jacobian matrix are derived. In Section 3, the optimization criteria for system stiffness and dexterity are presented and the models of the performance indices are deduced. In Section 4, the results of applying genetic algorithms and neural networks to optimize the performance indices of the parallel manipulator are discussed; both SOO and MOO issues are addressed. Section 5 gives the conclusions and the future work needed.

2. Geometric Modeling and Kinematic Analysis

2.1. Geometric Modeling

In this work, a 6-DOF parallel mechanism and its joint distributions both on the base and on the platform are shown in Figs. 1, 2 and 3. This mechanism consists of six identical extensible links, connecting the fixed base to a moving platform. The kinematic chains associated with the six legs, from base to platform, consist of a fixed Hooke joint, a moving link, an actuated prismatic joint, a second moving link and a spherical joint attached to the moving platform. It is also assumed that the vertices on the base and on the platform are located on circles of radii Rb and Rp, respectively.

A fixed reference frame is connected to the base of the mechanism and a moving coordinate frame is connected to the platform. In Fig. 2, the points of attachment of the actuated legs to the base are represented by Bi and the points of attachment of all legs to the platform are represented by Pi, with i = 1, ..., 6, while point P is located at the center of the platform.

The Cartesian coordinates of the platform are given by the position of point P with respect to the fixed frame and by the orientation of the platform (the orientation of the moving frame with respect to the fixed frame), represented by three Euler angles or by the rotation matrix Q.

2.2. Inverse Kinematics

The inverse kinematics problem is concerned with deducing the joint motions when the pose of the end-effector is known. If the coordinates of point Pi in the fixed frame are represented by its position vector, then we have

(1)

(2)

(3)

(4)

where the three vectors are, respectively, the position vector of point Pi expressed in the fixed coordinate frame, the position vector of point Pi expressed in the moving coordinate frame, and the position vector of point P expressed in the fixed frame as defined above, and

(5)

(6)

One can then write

, (7)

where Q is the rotation matrix from the fixed reference frame to the moving coordinate frame, and Tp, Tb are the angles that determine the attachment points of the six symmetric branched chains on the base and on the platform, respectively.

Subtracting vector bi from both sides of Eq. (7), one obtains

, (8)

Then, taking the Euclidean norm on both sides of Eq. (8), one has

, (9)

where the left-hand side is the length of the ith leg, i.e., the value of the ith joint coordinate. The solution of the inverse kinematic problem for the 6-DOF manipulator is therefore complete and can be written as

(10)
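To make the closed-form solution concrete, the following Python sketch evaluates the leg lengths for a given pose of the platform. It assumes the standard Gough-Stewart formulation implied by Eqs. (7)-(10) and a hypothetical symmetric joint layout built from Rb, Rp, Tb and Tp; the function names, the pairing of Tb/Tp with base/platform, and the sample values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def attachment_points(radius, half_angle):
    """Hypothetical symmetric layout: three pairs of joints on a circle of the
    given radius, the two joints of each pair separated by 2*half_angle about
    directions 0, 120 and 240 degrees (an assumption about the geometry)."""
    pts = []
    for k in range(3):
        centre = 2.0 * np.pi * k / 3.0
        for sign in (-1.0, 1.0):
            a = centre + sign * half_angle
            pts.append([radius * np.cos(a), radius * np.sin(a), 0.0])
    return np.array(pts)                              # shape (6, 3)

def leg_lengths(p, Q, platform_pts, base_pts):
    """Closed-form inverse kinematics: rho_i = ||p + Q p_i' - b_i||, i.e. the
    Euclidean norm taken in Eq. (9) after subtracting b_i in Eq. (8)."""
    return np.array([np.linalg.norm(p + Q @ pi - bi)
                     for pi, bi in zip(platform_pts, base_pts)])

# Example: neutral pose with mid-range values of the design variables of Table 1.
base_pts = attachment_points(0.17, np.deg2rad(43.0) / 2.0)    # Rb, Tb (assumed pairing)
plat_pts = attachment_points(0.075, np.deg2rad(23.0) / 2.0)   # Rp, Tp (assumed pairing)
rho = leg_lengths(np.array([0.0, 0.0, 0.21]), np.eye(3), plat_pts, base_pts)
```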

2.3. Jacobian Matrix

Considering the parallel component of the mechanism, the parallel Jacobian matrix can be obtained by differentiating Eq. (10) with respect to time, which yields

(11)

Since one has

(12)

with

(13)

where the vector appearing in Eq. (13) is the angular velocity of the platform. Differentiating Eq. (7), one obtains

(14)

Then, Eq. (11) can be rewritten as

(15)

Hence, one can write the velocity equation as

(16)

where vector t is the twist of the platform, and the remaining vector is defined as

(17)

and

(18)

(19)

where mi is a vector with 6 components, which can be expressed as

(20)

The linear transformation between the motion velocity of the manipulator and the joint velocities defines the Jacobian matrix of the robot. The Jacobian matrix represents the transmission ratio of motion velocities from joint space to end-effector space. According to Eq. (16), the Jacobian matrix can be written as

(21)

Therefore, the relationship between the Cartesian velocities and the joint rates is determined.
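As a companion to Eq. (21), the sketch below assembles the 6x6 Jacobian that maps the platform twist t = [v; omega] to the joint rates, using the standard Gough-Stewart derivation in which each row combines the unit leg vector with its moment term. Whether this matches the paper's Eq. (21) exactly cannot be verified from the text, so it should be read as an assumed equivalent form; it reuses the quantities of the inverse-kinematics sketch above.

```python
import numpy as np

def jacobian(p, Q, platform_pts, base_pts):
    """Row i is [n_i^T, ((Q p_i') x n_i)^T], where n_i is the unit vector along
    leg i, so that rho_dot = J @ t with t = [v; omega] (assumed form of Eq. (21))."""
    rows = []
    for pi, bi in zip(platform_pts, base_pts):
        s = p + Q @ pi - bi                 # leg vector from B_i to P_i
        n = s / np.linalg.norm(s)           # unit vector along the leg
        rows.append(np.concatenate([n, np.cross(Q @ pi, n)]))
    return np.array(rows)                   # shape (6, 6)
```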

3. Design Optimization

3.1. Optimization Principles

The goal of structural parameter design, also called dimensional synthesis, is to determine the optimal geometric configuration according to the objective function and the geometric constraints. To ensure that the parallel manipulator possesses good performance, such as high system stiffness and dexterity, optimization-based dimensional synthesis is one of the most significant steps in the design process of parallel manipulators.

The aim of optimization for this parallel manipulator is to maximize the system stiffness and to minimize the condition number of the Jacobian matrix, which expresses the dexterity. If the two aspects are addressed individually, the result is two SOO problems. Conversely, if the two aspects are considered together, the result is a MOO problem.

With traditional optimization methods, only a few geometric parameters can be handled because of the lack of convergence. This arises from the fact that traditional methods use a local search with a convergent stepwise procedure, e.g. relying on gradients, Hessians, linearity, and continuity, which compares the values of neighboring points and moves to the relatively optimal point [25]. Global optima can be found only if the problem possesses certain convexity properties that essentially guarantee that any local optimum is a global optimum. In other words, conventional methods are based on a point-to-point rule and run the risk of falling into local optima.

Genetic algorithms, in contrast, are based on a population-to-population rule and can escape from local optima. They have the advantages of robustness and good convergence properties:

• They require no knowledge or gradient information about the optimization problems; only the objective function and corresponding fitness levels influence the directions of search.

• Discontinuities present in the optimization problem have little effect on the overall optimization performance.

• They are generally more straightforward to apply, since there are no restrictions on the definition of the objective function.

• They use probabilistic transition rules, not deterministic ones.

• They perform well for large-scale optimization problems.

Genetic algorithms have been shown to solve linear and nonlinear problems by exploring all regions of state space and exponentially exploiting promising areas through mutation, crossover, and selection operations applied to individuals in the population. The greatest potential for the application of evolutionary optimization to real-world problems will come from their implementation on parallel machines, for evolution is an inherently parallel process [26,27].

Although a single-population genetic algorithm is powerful and performs well on a wide variety of problems, for multi-variable and complicated optimization problems it is difficult to find the optima using a limited population size and a limited number of generations [28]. Better results can be obtained by introducing multiple subpopulations. Fig. 4 shows the behavioral rationale of the extended multi-population genetic algorithm adopted in this research.

For the implementation of genetic algorithms, one problem is how to model the objective function. Although genetic algorithms can be used to search for the best solution set without artificial neural networks, doing so is very difficult and time-consuming, especially when the parameters are multifarious and the objective functions are so complex that genetic algorithms cannot work well with the analytical expressions of the performance indices, particularly in the MOO case. Neural networks are applied to deal with this problem. The output error of the neural networks is constrained to a small threshold value that does not affect the computational accuracy.
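The sketch below illustrates, under stated assumptions, how a trained network can stand in for the analytic objective inside the genetic algorithm: sample the design space, fit a small feedforward regressor, and expose it as the fitness function. It uses scikit-learn's MLPRegressor for brevity rather than the paper's own network and training settings, and `analytic_objective` is a hypothetical placeholder for the exact (expensive) performance-index computation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def analytic_objective(x):
    # Placeholder for the exact performance-index evaluation (e.g. Eq. (26));
    # any deterministic, expensive function of the five design variables works here.
    return float(np.sum(np.sin(10.0 * x)) + x @ x)

# 1) Sample the design space within the variable bounds (Table 1) and label the samples.
rng = np.random.default_rng(0)
lower = np.array([0.05, 0.12, 0.16, np.deg2rad(18.0), np.deg2rad(38.0)])
upper = np.array([0.10, 0.22, 0.26, np.deg2rad(28.0), np.deg2rad(48.0)])
X = lower + (upper - lower) * rng.random((2000, 5))
y = np.array([analytic_objective(x) for x in X])

# 2) Train a feedforward network as a cheap surrogate of the objective.
surrogate = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                         random_state=0).fit(X, y)

# 3) Hand the surrogate to the genetic algorithm as its fitness function.
def fitness(individual):
    return float(surrogate.predict(np.asarray(individual).reshape(1, -1))[0])
```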

Artificial neural networks are massively parallel adaptive networks of simple nonlinear computing elements called neurons, which are intended to abstract and model some of the functionality of the human nervous system and to simulate its powerful computational ability. A network, which can be viewed as a weighted directed graph in which artificial neurons are the basic elements and directed weighted edges represent connections between neurons, must be used to calculate the training error for the cost function. This is often done by randomly initializing the weights and training the network using one of the most commonly used learning algorithms, such as backpropagation [29,30]. The basic element of neural networks is illustrated in Fig. 5.

The following equation describes the relationship between the inputs and the output of an artificial neuron:

(22)

where fi is the transfer function, e.g. a sigmoid function, xt is the input signal, i.e. the corresponding output from the preceding neuron, wt is the weight of the related input signal, bi is the bias whose weight value is 1, and the last term is the threshold value. The whole neural network is organized from many such neurons, and a corresponding learning rule must be specified for the training process.
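A minimal sketch of the single neuron of Eq. (22), assuming a sigmoid transfer function and the usual sign conventions for the bias and threshold (the exact conventions cannot be recovered from the text):

```python
import numpy as np

def neuron(x, w, bias, theta):
    """Weighted sum of the inputs plus a bias, shifted by a threshold and
    passed through a sigmoid transfer function (assumed form of Eq. (22))."""
    z = np.dot(w, x) + bias - theta
    return 1.0 / (1.0 + np.exp(-z))
```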

The main function of neural networks is to establish the complex nonlinear relationship between inputs and outputs without deducing a mathematical expression; this is the so-called black-box property. The most important properties of neural networks that have made them popular for real-world applications are the capacity for associative recall and the capability for function approximation.

Furthermore, neural networks are widely noted for their model-free estimation capability, in the sense that they are able to create internal representations purely from training example sets, without being supplied a mathematical model of how the outputs depend upon the inputs. This is sometimes called similarity-based generalization. Using powerful learning algorithms, they are able to approximate functions by sifting through vast repositories of data. Because of their learning capability, they are referred to as adaptive function estimators. Therefore, they can be utilized to represent the expressions of system stiffness and dexterity for the 6-DOF parallel manipulator.

3.2. Solution of System Stiffness

From the viewpoint of mechanics, the stiffness is the measurement of the ability of a body or structure to resist deformation due to the action of external forces. The stiffness of a parallel mechanism at a given point of its workspace can be characterized by its stiffness matrix. This matrix relates the forces and torques applied at the gripper link in Cartesian space to the corresponding linear and angular Cartesian displacements.

Two main methods have been used to establish mechanism stiffness models. The first one is called matrix structural analysis, which models structures as a combination of elements and nodes. The second method relies on the calculation of the parallel mechanism's Jacobian matrix which is adopted in this research.

The parallel mechanisms considered here are such that the velocity relationship can be written as in Eq. (23),

(23)

where the first vector contains the joint rates and the second is the vector of Cartesian rates, a six-dimensional twist vector containing the velocity of a point on the platform and its angular velocity. Matrix J is usually termed the Jacobian matrix and is described in Eq. (21).

The stiffness matrix of the mechanism in the Cartesian space is then given by the following expression

(24)

Here KJ is the joint stiffness matrix of the parallel mechanism, in which each of the actuators is modeled as an elastic component, and ki is a scalar representing the joint stiffness of each actuator, which is modeled as a linear spring.

In particular, in the case for which all the actuators have the same stiffness, Eq. (24) reduces to

(25)

Furthermore, the diagonal elements of the stiffness matrix are used as the system stiffness value. These elements represent the pure stiffness in each direction, and they reflect the rigidity of machine tools more clearly and directly. The objective function for system stiffness optimization can be written as

(26)

where the summed terms are the diagonal elements of the mechanism's stiffness matrix and the weight factors characterize the priority of the stiffness in each direction. For the optimization of system stiffness, this objective should be maximized.
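A short sketch of this objective, assuming equal actuator stiffness so that the Cartesian stiffness matrix takes the form of Eq. (25); the weight vector is a designer choice and the function name is illustrative:

```python
import numpy as np

def stiffness_objective(J, k=1.0, weights=None):
    """Weighted sum of the diagonal entries of the Cartesian stiffness matrix
    K = k * J^T J (Eq. (25)); to be maximized, as in Eq. (26)."""
    K = k * J.T @ J
    w = np.ones(K.shape[0]) if weights is None else np.asarray(weights)
    return float(w @ np.diag(K))
```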

3.3. Solution of Dexterity

The condition number of the Jacobian matrix is used as the dexterity measure for the 6-DOF parallel manipulator,

(27)

where the condition number of the Jacobian matrix is defined as

(28)

where ‖·‖ denotes the norm of the related vector or matrix. If the Frobenius norm is considered, then

(29)

Here the Frobenius norm is defined as the square root of the sum of the squares of all elements of the Jacobian matrix. Hence the dexterity index can be deduced as

(30)

If the spectral norm is introduced, the dexterity index is described as

(31)

where the two terms stand for the maximum and minimum singular values of the Jacobian matrix J, respectively. Considering the computing time of the optimization process, this expression is selected as the objective function for the optimization of dexterity. The condition number, which is directly related to the singular values of the Jacobian matrix, lies between 1 and positive infinity. If it is equal to 1, all the singular values of the Jacobian matrix are the same and the manipulator is isotropic; if it tends to positive infinity, the Jacobian matrix is singular. Therefore, for the optimization of dexterity, the condition number should be minimized.
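The spectral-norm condition number of Eq. (31) can be computed directly from the singular values of the Jacobian, as in the brief sketch below (to be minimized):

```python
import numpy as np

def dexterity_objective(J):
    """Condition number kappa = sigma_max / sigma_min of the Jacobian (Eq. (31)):
    1 for an isotropic configuration, approaching infinity near a singularity."""
    sigma = np.linalg.svd(J, compute_uv=False)   # singular values, descending order
    return float(sigma[0] / sigma[-1])
```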

For the SOO problems, these two objective functions are considered separately. For the MOO problem, the two goals may conflict with each other, so a trade-off strategy must be adopted in the optimization process. The integration of neural networks, genetic algorithms and the Pareto approach can be viewed as a type of Pareto evolutionary neural network used to search for the optimal solution sets of the MOO problem.

3.4. Multi-objective Genetic Algorithms

Three main multi-objective genetic algorithms will be introduced as follows:

1. Clustering function approach

The main principle of this approach is to convert the MOO problem into an SOO problem by assigning weighting factors to the different objective function values.

For instance, if there are several objective functions, the combined objective becomes:

(32)

where the weighting factors reflect the relative importance assigned to each objective.

The multi-objective problem then becomes a single-objective one. A typical application of the clustering function approach can be found in [31].
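A minimal sketch of the clustering-function (weighted-sum) idea of Eq. (32); normalizing the weights to sum to one is an assumption, and in practice the objective values themselves are usually scaled to comparable ranges first:

```python
import numpy as np

def weighted_sum(objective_values, weights):
    """Collapse several objective values into one scalar with designer-chosen
    weighting factors, turning the MOO problem into an SOO problem."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                              # assumed normalization
    return float(np.dot(w, objective_values))
```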

2. Population based approach

The population-based approach originates from the concept of population division [32]. Coevolution, including competition and cooperation, is its basic feature. However, there are still no fully effective methods for population decomposition; the determination of sub-population sizes, the selection of representatives, and their application modalities and domains need further development. The basis for decomposing the co-evolutionary population is piecewise interval correlation obtained by an iterative linkage-learning method. For a multi-population genetic algorithm, the strategy for dynamically changing the search area follows the distribution of the best individual of each population, and the population size is adaptively adjusted according to the measure of the search area and the search granularity. A typical application of the population-based approach can be found in [33].

3. Pareto based approach

The concept of the Pareto method was originally introduced by Francis Ysidro Edgeworth and then generalized by Vilfredo Pareto [34]. A solution belongs to the Pareto set if there is no other solution that can improve at least one of the objectives without degrading any other objective. Consider a set of solutions that provides maximum information about the multi-objective optimization problem. By comparing each solution with every other solution, those solutions dominated by any other solution in all objectives are flagged as inferior [35-37]. In the problem of maximizing k objective functions, a decision vector (set of variables) x*∈F is a Pareto-optimal solution if no other decision vector satisfies both of the following conditions:

(33)

(34)

In like manner, decision vector x dominates y in the maximization problem if both of the following conditions are true:

(35)

(36)

According to Eqs. (33)-(36), the Pareto-optimal set can be defined as follows: if there is no solution in the search space that dominates any member of the set P, then the solutions belonging to P constitute a global Pareto-optimal set. A typical application of the Pareto-based approach can be found in [38]. This method is adopted in this research.
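The dominance test of Eqs. (35)-(36) and the resulting Pareto set extraction can be sketched as follows, stated for maximization; in this paper the condition-number objective is minimized, so it would be negated (or the inequalities flipped) before applying the test:

```python
import numpy as np

def dominates(fx, fy):
    """fx dominates fy (maximization): no worse in every objective and
    strictly better in at least one (Eqs. (35)-(36))."""
    fx, fy = np.asarray(fx), np.asarray(fy)
    return bool(np.all(fx >= fy) and np.any(fx > fy))

def pareto_set(front):
    """Indices of the non-dominated members of `front`, a list of objective
    vectors: the global Pareto-optimal set over the candidates supplied."""
    return [i for i, fi in enumerate(front)
            if not any(dominates(fj, fi) for j, fj in enumerate(front) if j != i)]
```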

4. Simulation

4.1. Objective Optimization of System Stiffness

In order to obtain the maximum system stiffness of the spatial 6-DOF parallel manipulator shown in Figs. 1 and 2, five architecture and behavior parameters are used as optimization variables. The vector of optimization variables is

(37)

where Rp and Rb are the radii of the moving platform and the base respectively, z is the height of the platform, and Tp and Tb are the angles that determine the attachment points on the base and on the platform respectively; their bounds are given in Table 1.

Table 1: Variables of structure parameters

Rp: (0.05, 0.1) m
Rb: (0.12, 0.22) m
z:  (0.16, 0.26) m
Tp: (18°, 28°)
Tb: (38°, 48°)

The standard backpropagation learning algorithm, the most popular training method for feedforward neural networks, is based on the principle of steepest-descent gradient minimization of a criterion function representing the instantaneous error between the target and the predicted output. The criterion function can be expressed as follows:

(38)

where the two vectors are the desired network output and the actual network output, respectively, and N is the vector dimension. In this scenario, the five geometrical parameters of the 6-DOF parallel manipulator are chosen as the inputs of the feedforward neural network, and the performance index is used as the output.

The basic training steps of a neural network with the standard backpropagation algorithm are:

A) Initialize the weights and biases in each layer with small random values to make sure that the weighted inputs of the network are not saturated.

B) Confirm the set of input/output pairs and the network structure. Set the related parameters, i.e. the desired minimal error, the maximal number of iterations and the learning rate.

C) Compare the actual output with desired network response and calculate the deviation.

D) Update the weights based on the criterion function in each epoch.

E) Continue the above two steps until the network satisfies the training requirement.
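A minimal sketch of steps A-E for a one-hidden-layer network trained by plain steepest-descent backpropagation on the squared-error criterion of Eq. (38). The layer size, learning rate and stopping values are illustrative choices, not the settings used in this work (which uses two hidden layers of 16 neurons):

```python
import numpy as np

def train_backprop(X, y, hidden=16, lr=0.05, epochs=5000, tol=1e-5, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # A) small random initial weights and zero biases (avoid saturating the sigmoids)
    W1 = rng.normal(scale=0.1, size=(d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.1, size=(hidden, 1)); b2 = np.zeros(1)
    y = y.reshape(-1, 1)
    loss = np.inf
    for _ in range(epochs):                        # B) fixed iteration budget and target error
        h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))   # sigmoid hidden layer
        out = h @ W2 + b2                          # linear output layer
        err = out - y                              # C) deviation from the desired response
        loss = 0.5 * np.mean(err ** 2)
        if loss < tol:                             # E) stop once the target error is reached
            break
        # D) gradients of the criterion, propagated backwards, then a descent step
        g_out = err / n
        g_W2 = h.T @ g_out;  g_b2 = g_out.sum(axis=0)
        g_h = (g_out @ W2.T) * h * (1.0 - h)
        g_W1 = X.T @ g_h;    g_b1 = g_h.sum(axis=0)
        W2 -= lr * g_W2;  b2 -= lr * g_b2
        W1 -= lr * g_W1;  b1 -= lr * g_b1
    return (W1, b1, W2, b2), loss
```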

Fig. 6 shows the topology of the neural network developed as the objective function to model the analytical solution of system stiffness. In this case, two hidden layers with sigmoid transfer functions (see Fig. 5) are established, with 16 neurons in each hidden layer. The input vectors are random arrangements of discretized values of the five structure variables.

Fig. 7 illustrates the training result with the standard backpropagation learning algorithm, where the blue curve denotes the quadratic sum of the output errors with respect to the ideal output values. After training for 206 epochs, the target error is reached.

The genetic algorithm can be implemented to search for the best solutions once the trained neural network is ready to serve as the objective function. Fig. 8 describes the evolution of the best individual over 40 generations. The optimal system stiffness value is 6060.

The evolution of the best individuals of the population (the architecture and behavior variables) during the implementation of the genetic algorithm is shown in Fig. 9. By adjusting the five parameters simultaneously, the optimization results for system stiffness are obtained; after 40 generations they have converged.

4.2. Objective Optimization of Dexterity

For the optimization of dexterity, some differences from the previous case arise. Since the analytical expression of the dexterity index is more complex than that of the system stiffness, computing time and error accuracy must be given priority in the design of the neural network structure and the choice of the learning algorithm.

As shown in Fig. 10, a neural network containing a single hidden layer with 100 neurons is developed; using more neurons and fewer hidden layers balances computing time against error accuracy. An improved training method, the Levenberg-Marquardt algorithm, which disregards nuisance directions in the parameter space that influence the criterion only marginally, is introduced to increase the learning rate, while still performing an update close to the efficient Gauss-Newton directions within the subset of the important parameters [39]. The core of the LM algorithm is the calculation and inversion, at each training cycle, of an approximation to the Hessian [40]. This method improves the solution of problems that are much harder to solve by merely adjusting the learning rate repeatedly. It can be described as:

(39)

(40)

where X is the vector constituted by all the weights and biases of the neural network, the search direction is a vector in the space spanned by the components of X, and the step size is chosen to minimize the criterion along that direction. The Levenberg-Marquardt algorithm is the integration of the steepest-descent gradient approach and the Newton method, a common numerical optimization method. Its search direction is

(41)

where H is the Hessian matrix, I is an identity matrix, and the damping coefficient produces a conditioning effect; it is adjusted automatically until a downhill step is produced at each epoch. For dexterity optimization, this fast modified backpropagation algorithm is adopted.
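One Levenberg-Marquardt step for a least-squares training criterion can be sketched as follows; `residuals` and `jac` are hypothetical callables returning the residual vector and its Jacobian with respect to the parameter vector X, and the Gauss-Newton product is used as the Hessian approximation that Eq. (41) damps with the identity:

```python
import numpy as np

def lm_step(residuals, jac, x, mu=1e-3):
    """One damped Gauss-Newton (Levenberg-Marquardt) update, Eq. (41):
    solve (H + mu*I) d = -g with H ~= Jr^T Jr and g = Jr^T r, then step.
    In a full training loop mu is decreased after a successful (downhill)
    step and increased otherwise."""
    r = residuals(x)                      # residual vector at the current parameters
    Jr = jac(x)                           # Jacobian of the residuals w.r.t. x
    H = Jr.T @ Jr                         # Gauss-Newton Hessian approximation
    g = Jr.T @ r                          # gradient of 0.5 * ||r||^2
    d = np.linalg.solve(H + mu * np.eye(len(x)), -g)
    return x + d
```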

Fig. 11 describes the training result using the feedforward neural network with the Levenberg-Marquardt learning algorithm, where the blue curve denotes the quadratic sum of the output errors with respect to the ideal output values. As can be seen from the figure, the target error of 10^-8 is much smaller than that used for the system stiffness training, which was 10^-5. Moreover, the target error is reached after training for only 82 epochs. It can be observed that the number of iterations is reduced while the computational precision is greatly improved when this fast learning algorithm is used for the training of the neural network.

The optimization process of dexterity based on the condition number of the Jacobian matrix is illustrated in Fig. 12. After a global stochastic search over 40 generations, the optimal dexterity value converges to 353.1.

The evolution of the architecture and behavior parameters of the 6-DOF parallel manipulator for dexterity optimization is described in Fig. 13. Compared with Fig. 9, it can be seen that the corresponding convergence points of the five parameters in these two figures are not the same.

In other words, the two objective functions conflict with each other. This issue is addressed in the following section, where MOO based on Pareto-optimal solutions is considered.

4.3. Multi-Objective Optimization

MOO problems consist of simultaneously optimizing several objective functions and are quite different from SOO problems. A single search for the global optimum is sufficient for an SOO task. In a MOO problem, however, it is necessary to find all possible trade-offs among multiple objective functions that usually conflict with each other. The set of Pareto-optimal solutions is generally provided to the decision maker.

The following initial parameters of the Pareto-based genetic algorithm are set before implementation:

Number of sub-population = 5

Number of individuals in each sub-population = {50, 30, 30, 40, 50}

Mutation range = 0.01

Mutation precision = 24

Max generations for algorithm termination = 300

After optimization, the possible optimal solutions in the whole solution space are obtained without combining all the objective functions into a single objective function using weighting factors. Fig. 14 shows the Pareto-optimal frontier sets, from which the designers can intuitively determine the final solutions according to their preferences; hence the analysis process and cycle time are greatly reduced. The trade-off between the objectives of system stiffness and dexterity is demonstrated by the distribution of these Pareto points, from which a compromise can be selected. If any other pair of design values is chosen from the lower/right area in Fig. 14, it corresponds to a point that is inferior with respect to the Pareto frontier, while the upper/left side is the inaccessible area for all possible solution pairs. That is why the Pareto solutions are called the Pareto-optimal frontier sets. Furthermore, six optimum design points, labeled "a" to "f", are marked; their corresponding objective values and design parameters are shown in Table 2.

Table 2: Objective function values from the Pareto sets and corresponding design variables

Point   dexterity/system stiffness   Rp (m)   Rb (m)    z (m)     Tp (rad)   Tb (rad)
a       357.62/6055.2                0.1      0.20928   0.16064   0.31416    0.83762
b       384.82/6056.4                0.1      0.19281   0.16      0.31416    0.83448
c       452.65/6058                  0.1      0.17055   0.16021   0.31442    0.83775
d       555.44/6059.1                0.1      0.1498    0.16149   0.31416    0.83775
e       749.62/6059.8                0.1      0.12739   0.16426   0.31483    0.8377
f       1000.6/6060                  0.1      0.12006   0.17632   0.31956    0.82563

5. Conclusions

Performance optimization is an urgent issue for the more extensive application of parallel manipulators in industry. By adjusting the architecture and behavior parameters, the optimal performance indices, such as the system stiffness and dexterity of the 6-DOF parallel mechanism, can be found. Since only a few geometric parameters can be handled with traditional optimization methods, owing to the lack of convergence of such algorithms for complex objective functions, neural networks and genetic algorithms, two important artificial intelligence approaches, are implemented simultaneously as the optimization guidelines in this paper. Single-objective and multi-objective optimization problems are both addressed to demonstrate the validity of the artificial intelligence approaches; the two approaches are compatible in nature. For the MOO problem in particular, the Pareto evolutionary neural network approach offers high efficiency, good generalization and programmability, regardless of whether the objective functions have analytic solutions. Future work will focus on comprehensive MOO problems for complicated parallel mechanisms to realize the simultaneous optimization of more performance indices, such as system stiffness, workspace, dexterity and manipulability. A physical parallel manipulator is under development using the proposed performance evaluation criteria.

References

[1] Stewart D. A platform with six degrees of freedom. Proc Inst Mech Eng 1965; 180(5):371-86.

[2] Hostens I, Anthonis J, Ramon H. New design for a 6 dof vibration simulator with improved reliability and performance. Mech Syst Signal Proc 2005; 19(1):105-22.

[3] Carretero JA, Podhorodeski RP, Nahon MN, Gosselin CM. Kinematic analysis and optimization of a new three degree-of-freedom spatial parallel manipulator. ASME J Mech Des 2000; 122(1):17-24.

[4] Dunlop GR, Jones TP. Position analysis of a two DOF parallel mechanism-Canterbury tracker. Mech Mach Theory 1999; 34(1):599-614.

[5] Lee KM, Arjunan S. A three-degrees-of-freedom micromotion in-parallel actuated manipulator. IEEE Trans Rob Autom 1991; 7(5):634-41.

[6] Jensen KA, Lusk CP, Howell LL. An XYZ micromanipulator with three translational degrees of freedom. Robotica 2006; 24(3):305-14.

[7] Xu QS, Li YM. A novel design of a 3-PRC compliant parallel micromanipulator for nanomanipulation. Robotica 2006; 24(4):527-8.

[8] Fedewa D, Mehrabi MG, Kota S, Orlandea N, Gopalakrishran V. Parallel structures and their applications in reconfigurable machining systems. In: Proceedings of parallel kinematic machines-international conference, 2000. p. 87-97.

[9] Nguyen CC, Zhou ZL, Bryfogis M. A Robotically assisted munition loading system. J Robot Syst 1995; 12(12): 871-81.

[10] Yang GL, Chen IM, Chen WH, Lin W. Kinematic design of a six-DOF parallel-kinematics machine with decoupled-motion architecture. IEEE Trans Rob Autom 2004; 20(5):876-84.

[11] Li YM, Xu QS. Stiffness analysis for a 3-PUU parallel kinematic machine. Mech Mach Theory 2008; 43(2):186-200.

[12] Chen SL, Chang TH, Lin YC. Applications of equivalent components concept on the singularity analysis of TRR-XY hybrid parallel kinematic machine tools. Int J Adv Manuf Technol 2006; 30(7-8):778-88

[13] Refaat S, Herve JM, Nahavandi S. Two-mode overconstrained three-DOFs rotational-translational linear-motor-based parallel-kinematics mechanism for machine tool applications. Robotica 2007; 25(4):461-6.

[14] Zhang D, Xi F, Mechefske C, Sherman YT. Analysis of parallel kinematic machines with kinetostatic modelling method, Robot Comput- Integr Manuf 2004, 20(2):151-65.

[15] Staicu S, Zhang D. Dynamic modeling of a 4-DOF parallel kinematic machine with revolute actuators, Int J Manuf Res 2008; 3(2):172-87.

[16] Zhang D, Wang L, Esmailzadeh E. PKM capabilities and applications exploration in a collaborative virtual environment, Robot Comput- Integr Manuf 2006; 22(4):384-95.

[17] Bergamaschi PR, Nogueira AC, Saramago FP. Design and optimization of 3R manipulators using the workspace features. Appl Math Comput 2006; 172(1):439-63.

[18] Rout BK, Mittal RK. Parametric design optimization of 2-DOF R-R planar manipulator: A design of experiment approach. Robot Comput- Integr Manuf 2008; 24(2):239-48.

[19] Yu A, Bonev IA, Paul ZM. Geometric approach to the accuracy analysis of a class of 3-DOF planar parallel robots. Mech Mach Theory 2008; 43(3):364-75.

[20] Zhao JS, Zhang SL, Dong JX, Feng ZJ, Zhou K. Optimizing the kinematic chains for a spatial parallel manipulator via searching the desired dexterous workspace. Robot Comput- Integr Manuf 2007; 23(1):38-46.

[21] Stock M, Miller K. Optimal kinematic design of spatial parallel manipulators: application to linear delta robot. ASME J Mech Des 2003; 125:292-301.

[22] Kucuk S, Bingul Z. Robot workspace optimization based on a novel local and global performance indices. In: IEEE International Symposium on Industrial Electronics, 2005. p.1593-8.

[23] Ceccarelli M, Lanni C. A multi-objective optimum design of general 3R manipulators for prescribed workspace limits. Mech Mach Theory 2004; 39(2):119-132.

[24] Holland JH. Adaptation in natural and artificial systems. The University of Michigan Press, Ann Arbor, MI, 1975.

[25] Gosselin CM, Guillot M. The synthesis of manipulators with prescribed workspace, ASME J Mech Des 1991; 113(1):451-5.

[26] Fogel D B. An introduction to simulated evolutionary optimization. IEEE Trans Neural Netw 1994; 5(1):3-14.

[27] Mininno E, Cupertino F, Naso D. Real-valued compact genetic algorithms for embedded microcontroller optimization. IEEE Trans Evol Comput 2008; 12(2):203-19

[28] Gong DW, Hao GS, Zhou Y, Sun XY. Interactive genetic algorithms with multi-population adaptive hierarchy and their application in fashion design. Appl Math Comput 2007; 185(2):1098-108.

[29] Liu YY, Starzyk JA, Zhu Z. Optimized approximation algorithm in neural networks without overfitting. IEEE Trans Neural Netw 2008; 19(6): 983-95.

[30] Ludermir TB, Yamazaki A, Zanchettin C. An optimization methodology for neural network weights and architectures. IEEE Trans Neural Netw 2006; 17(6): 1452-9.

[31] Zhang D, Wang L, Lang S. Parallel kinematic machines: design, analysis and simulation in an integrated virtual environment. ASME J Mech Des 2005; 127(7):580-8.

[32] Potter MA, De Jong KA. Cooperative coevolutionary architecture for evolving coadapted subcomponents. Evol Comput 2000; 8(1):1-29.

[33] Pedrajas NG, Martínez CH, Perez JM. Multi-objective cooperative coevolution of artificial neural networks. Neural Netw 2002; 15(1):1259-78.

[34] Coello CA, Veldhuizen DA, Lamont GB. Evolutionary algorithms for solving multi-objective problems. Kluwer Academic Publishers, New York, 2002.

[35] Baumgartner U, Magele Ch, Renhart W. Pareto optimality and particle swarm optimization. IEEE Trans Magn 2004; 40(2):1172-5.

[36] Papila M, Haftka RT, Nishida T, Sheplak M. Piezoresistive microphone design pareto optimization: tradeoff between sensitivity and noise floor. J Microelectromech Syst 2006; 15(6):1632-43.

[37] Fieldsend JE, Singh S. Pareto evolutionary neural networks. IEEE Trans Neural Netw 2005;16(2):338- 54.

[38] Gupta S, Tiwari R, Nair SB. Multi-objective design optimization of rolling bearings using genetic algorithms. Mech Mach Theory 2007; 42(10):1418-43.

[39] Ngia LSH, Sjoberg J. Efficient training of neural nets for nonlinear adaptive filtering using a recursive Levenberg-Marquardt algorithm. IEEE Trans Signal Process 2000; 48(7):1915-27.

[40] Toledo A, Pinzolas M, Ibarrola JJ, Lera G. Improvement of the neighborhood based Levenberg-Marquardt algorithm by local adaptation of the learning coefficient. IEEE Trans Evol Comput 2005; 16(4):988-92.
