Five Traditional Senses of Humans

1. Introduction

For every human there are five traditional senses: hearing, sight, touch, smell and taste. From the 19th century onward, most inventions and research concentrated on how to record and analyze these senses, especially hearing and sight: the inventions of the telephone and the television provided methods for recording and transmitting these two kinds of sensory information. With the fast development of computer technology in the 20th century, digital media opened new research fields in audio and video technologies. Since about 90 percent of human perception comes from the senses of hearing, sight and touch, building a highly immersive virtual world requires a technology that enables people to touch and feel virtual objects in virtual environments. Today, the technology of simulating the feeling of touch is called haptics.

The term haptic comes from the Greek word haptesthai, meaning “to touch”, and in science it denotes the study of the sense of touch. Haptics refers to the science of perception and manipulation of objects in an environment, where the objects and the environment can be real, virtual or a combination of both.

Before the 1990s, haptics research was mainly based on applications of various robots, such as tele-operated robots. Perhaps the first haptic system aimed at computing the interaction forces between geometric objects emerged in the early 1990s. Around the same time, various haptic devices appeared with different structures and designs. A stylus-based haptic interface was designed in 1994: the PHANToM haptic device, which was later commercialized and became one of the most popular haptic devices in research. In the 21st century, the price of haptic devices has dropped greatly. In 2006, the world's first consumer 3D haptic device, the Falcon, was launched by Novint Technologies, and it has been widely used in video games.

After the creation of haptic devices, the associated control approaches and force feedback algorithms became a research field of their own. Haptic rendering technology refers to the methods used to compute and generate forces and torques in response to interactions. For the basic haptic rendering of objects, algorithms in the 1990s computed the force components along three directions at the probe's tip. Zilles and Salisbury proposed a constraint-based god-object method for point-based haptic rendering. Rendering algorithms that follow this point-based contact are called 3-DoF haptic rendering algorithms, since only the positional values of 3 axes are provided. In the 2000s, a new class of methods called 6-DoF haptic rendering appeared, providing both 3D force and torque feedback. It gives users much more dexterity to feel, explore and manipulate objects in the virtual environment. Recently, 6-DoF haptic rendering algorithms have also been used for haptic contact with both rigid and deformable models.

Existing haptic rendering approaches have reduced the complexity of computation and provided feasible solutions for many applications of 3-DoF and 6-DoF haptic devices, such as medical training and simulation, 3D games, painting, molecular docking and education. In each of these applications, the interactions between the haptic interface and the objects are mostly based on point-surface contact.

Future development trends toward object-based haptic rendering methods that simulate contacts involving both forces and torques. Moreover, by combining computer graphics with haptic rendering technology, a haptic system can provide a more realistic and immersive touch experience for users, such as the feeling of water and deformable objects. Therefore, for more realistic simulation, the haptic system must use more effective and stable algorithms matched to the haptic devices.

2. Haptic Devices

Haptic devices or interfaces can be regarded as small mechanical devices that provide communication between computer and human through energy exchange. As bidirectional communication devices, they realize both input and output functions simultaneously.

As an input device, a haptic device serves as a tool to manipulate 3D objects in virtual environments or tele-operation applications. As an output device, it exerts force feedback on the user, giving the user a feeling of touch. Examples of haptic devices include common consumer peripheral devices equipped with special motors and sensors (e.g., force feedback joysticks and steering wheels) and more sophisticated devices designed for research, medical and industrial applications (e.g., the PHANToM and Falcon devices).

2.1 Designs Based on Kinematic Structure

In research and industrial applications, the majority of haptic devices use a kinematic design that provides three or six degrees of freedom. They can be classified into two main categories: serial structures and parallel structures.

Serial Structure:

Serial mechanisms do not include any passive joint, and all actuators are placed in serial order within one single kinematic chain. For instance, the popular SensAble PHANToM haptic device is a serial structure design composed of a set of actuators and robotic arms.

Parallel Structure:

Parallel structures make it possible to place all actuators at the frame, minimizing the moving masses. The small inertia makes them highly relevant for haptic applications. For example, as shown in Fig. 3, the Novint Falcon is an entirely new kind of commercial haptic device that can be used for game control. The parallel design used in this device provides users with a true virtual touch completely different from previous game controllers. When the user holds a gun, its gravity can be simulated so that the user feels the weight of the gun. When users fight in the game, vibrations and collisions are generated so that users can feel the recoil of guns and the clash of swords.

2.2 Designs Based on Virtual Representation

Besides the classification based on kinematic structure, there are other haptic devices designed for specific applications. Haptic feedback can be either a force exerted on the user's hand or a texture sensation applied to one finger. Therefore, haptic devices can also be classified by their representations in the virtual environment.

2.2.1 Vector Representation

A haptic device is commonly represented as a virtual point whose position is given by coordinates along three axes. To calculate the force feedback, a vector starting from the position of the haptic point is used to represent the force; the direction and magnitude of the vector correspond to the direction and magnitude of the force, respectively. For this kind of application, current three degree-of-freedom haptic devices are capable enough to be represented as a vector in the 3D scene and to transmit the force feedback to the user. The PHANTOM Omni and Novint Falcon are typical commercial three degree-of-freedom haptic devices used in such applications.

2.2.2 Object Representation

Nowadays, with the development of computer graphics, virtual environments are becoming more and more complicated, and the simple point representation of a haptic device cannot satisfy the requirements of sophisticated manipulation. For this purpose, six degree-of-freedom haptic devices have been developed to manipulate complex tools or objects in the virtual environment, since they provide not only movement along three axes but also rotation around them. Six degree-of-freedom haptic devices are widely used in medical training systems, virtual assembly, and research. One popular device of this kind is the SensAble PHANTOM Premium 6DOF, which provides force feedback in three translational degrees of freedom and torque feedback in three rotational degrees of freedom in the yaw, pitch and roll directions.

For example, in virtual assembly, torque simulation makes it possible to feel the reaction forces and torques produced by collisions between objects. While the user manipulates a physical object of interest using a 6-DOF haptic device, as shown in Fig. 4, the user gets a realistic and intuitive feeling of contact, making the assembly process more reasonable and effective.

In virtual or tele-operation environments, haptic devices provide a convenient and intuitive method of robot driving and manipulation, as shown in Fig. 5. The operator can control a remote robot and feel, in real time, the accurate contact force and rotational torque experienced by the robot. The force and torque feedback also allows the operator to interact better with the environment and to feel the limits of travel. This is very important for robots that perform maintenance tasks in hazardous environments.

In surgery simulation, as shown in Fig. 6, a six degree-of-freedom haptic device can be used as a drilling tool in a bone surgery simulation. While the user is drilling the bone, drilling torque is provided around the drilling direction. This accurate force and torque feedback makes the training process for medical students more realistic and faster.

2.2.3 Hand and Body Representation

To achieve more accurate manipulation and force feedback, many exoskeleton devices have been developed that can be worn tightly on the hand or body. One example, shown in Fig. 7, is the CyberGrasp from Immersion: a lightweight, unencumbering force-reflecting exoskeleton that fits over a CyberGlove and adds resistive force feedback to each finger. With the CyberGrasp force feedback system, users are able to explore the physical properties of the computer-generated 3D objects they manipulate in a simulated virtual world. This kind of haptic device offers numerous applications, including virtual training, medical applications, computer-aided design and tele-operation.

2.2.4 Surface Texture Representation

Some haptic-tactile devices are used to provide a feeling of surface texture to users through finger contact. Basically, these devices consist of a matrix of arrayed pins, but there are many distinct methods to drive the pins, including thermo-pneumatic actuation, fixation of aggregate state (Fig. 8), piezoelectric actuators and so on. Compared with the previous devices, haptic-tactile devices can provide a better tactile resolution for feeling 3D images and videos.

3. Haptic Rendering Concepts

Haptic rendering technology refers to the process of using algorithms to compute and generate forces and torques in response to interactions between the haptic interface and virtual objects in the virtual environment. For the basic haptic rendering of objects in the 1990s, the haptic device was represented by a point probe in the virtual world, and algorithms computed the force components along three directions at the probe's tip. Zilles and Salisbury proposed a constraint-based god-object method for point-based haptic rendering. This god-object method creates a virtual model of the haptic interface and allows a user to intuitively control the point probing the virtual objects. Rendering algorithms that follow this point-based contact are called 3-DoF haptic rendering algorithms, since only the positional values of 3 axes are provided.

Point-based interaction greatly simplifies both device and algorithm development while permitting high bandwidth and force fidelity. However, this kind of haptic rendering can only provide point force feedback to users, while many applications such as surgery simulation, molecular docking, and scientific exploration require the ability to simulate object-based haptic interaction. This problem is called 6-DoF haptic rendering, because it provides both 3D force and torque feedback, giving users much more dexterity to feel, explore and manipulate objects in the virtual environment. The basic steps in the computation of 6-DoF haptic rendering involve collision detection, contact manifold computation, penetration depth estimation and force computation. Some algorithms using localized contact computations have also been proposed. Recently, 6-DoF haptic rendering algorithms have also been used for haptic contact with both rigid and deformable models.

Haptic rendering is a bidirectional interactive activity that is completely different from both audio and video technologies. Its realization faces two main challenges: high update rates and high computational cost. Depending on the application and its requirements, the algorithm should differ so as to achieve a balance between these two demands. For the simulation of painting and sculpture, accuracy is the most important aspect of haptic rendering, whereas for collaborative haptic networks, the stability of the network and the update rate of data communication matter more.

Research on haptic rendering will focus on new, effective 6-DoF algorithms that provide feedback of both forces and torques. Collaborative virtual environments, in which participants interact haptically with each other, also have a bright future in internet applications: users can simultaneously manipulate the same object and feel its properties from different user sides. Such research could find applications in 3D online games, e-learning and medical training.

3.1 Basic Principle of Haptic Rendering

Generally, a haptic rendering algorithm is composed of two main parts, collision detection and collision response; the relationship between the two parts is shown in Fig. 9.

3.1.1 Collision Detection

When the user manipulates the probe of the haptic device, its orientation and position data are acquired in real time at a rate of 1 kHz. Based on these data, collisions with virtual objects in the 3D environment are detected.

Although collision detection methods have been extensively studied in computer graphics, those algorithms were not designed for applications of haptic devices. Collision detection based only on the 3D objects in the virtual environment is therefore not enough for simulating the force feedback of haptic interactions: how the collision occurs and how it evolves over time are crucial factors for accurately computing the interaction forces that will be reflected to the user through the haptic device.

3.1.2 Collision Response

If a collision is detected, the interaction forces are computed using preprogrammed rules for collision response. To provide the operator with a tactual representation of 3D objects and texture details, the response force is conveyed to the user through the haptic device.

Nowadays, most existing collision response methods are used for interactions between rigid objects, and the force response algorithms are largely based on the mass-spring-damper model. For other applications, such as hydromechanics, electromagnetics and biomolecular docking, the response forces should be based on different algorithms.
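To make this detection-response cycle concrete, the following minimal Python sketch shows one iteration of a haptic servo loop against a single flat surface. It is only an illustration: the device API (read_pose, send_force) is hypothetical, and a real loop would run at about 1 kHz.

    import numpy as np

    STIFFNESS = 800.0  # N/m, an assumed surface stiffness

    def signed_distance_to_plane(p, plane_point, plane_normal):
        # Toy collision detection: signed distance from the probe point to a plane.
        return float(np.dot(p - plane_point, plane_normal))

    def haptic_servo_step(device):
        # One iteration of the (nominally 1 kHz) haptic loop.
        p = device.read_pose()                    # hypothetical device call
        n = np.array([0.0, 1.0, 0.0])             # the plane's outward normal
        d = signed_distance_to_plane(p, np.zeros(3), n)
        if d < 0.0:                               # probe is below the surface
            force = -STIFFNESS * d * n            # collision response (penalty force)
        else:
            force = np.zeros(3)
        device.send_force(force)                  # hypothetical device call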

3.2 Haptic Interaction Methods

Existing interaction methods for haptic rendering can be distinguished by the way the probing object is modeled: point-based, ray-based and object-based haptic interaction.

3.2.1 Point-based Haptic Interaction:

In point-based haptic interactions, the interaction and collision happen only between the haptic interface point (HIP) and the surface of the virtual objects, see Fig. 10A. The haptic interface point is represented as an infinitesimal point in the 3D environment. When the user moves the probe of the haptic device, the collision detection algorithm checks whether the HIP is inside or outside the virtual object.

When a collision happens, an invisible Ideal Haptic Interface Point (IHIP, also known as god-object, proxy point, or surface contact point) stays on the surface of the object, as shown in Fig. 11. The algorithm then takes the distance between the two points as the depth of indentation, and the force can be calculated with the following equation:

F = k x, where k is the stiffness of the surface and x is the distance between the real haptic interface point and the ideal (virtual) haptic interface point.
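As a minimal sketch of this spring law in code, assuming the IHIP has already been found by the constraint-based algorithm:

    import numpy as np

    def point_contact_force(hip, ihip, k):
        # Penalty force for point-based rendering: F = k * (IHIP - HIP).
        # hip:  haptic interface point (inside the object after penetration)
        # ihip: ideal haptic interface point, constrained to the surface
        # k:    surface stiffness in N/m
        return k * (ihip - hip)   # points out of the object, back toward the surface

    # Example: probe 2 mm below a surface of stiffness 500 N/m
    f = point_contact_force(np.array([0.0, -0.002, 0.0]), np.zeros(3), 500.0)
    # f is [0, 1, 0] N, pushing the probe back up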

For exploring the shape and surface properties of objects in virtual environments, point-based methods are usually sufficient and can provide users with force feedback similar to what they would feel when exploring objects in real environments with the tip of a stick. Using point-based haptic rendering techniques, polyhedra, implicit surfaces, and volumetric objects have been successfully rendered. Point-based methods, however, are not capable of simulating more general tool-object interactions that involve one or more objects in contact with the tool at arbitrary locations on the tool. In such a context, both the forces and the torques displayed to the user need to be computed independently.

3.2.2 Ray-based Haptic Interaction:

In ray-based haptic interactions, the generic probe of the haptic device is modeled as a line segment whose orientation is taken into account, and collisions are checked between the finite line and the objects. This approach enables the user to touch multiple objects simultaneously. In addition to forces, torque interactions can be simulated, which is not possible with point-based algorithms, as shown in Fig. 10B.

Ray-based rendering can be considered an approximation of long tools and an intermediate stage on the way to full interaction between a 3D cursor and 3D objects. Also, if the geometric model of the probing object can be simplified to a set of connected line segments, ray-based rendering can be used and will be faster than simulating full 3D object interactions. However, if the probing object has a complex geometry that cannot easily be modeled as a set of line segments, the simulation of six degree-of-freedom object-object interactions has to be considered.

3.2.3 Object-based Haptic Interaction:

In object-based haptic interactions, the haptic interface is represented by a 3D object that can be manipulated in six degrees of freedom and provides force and torque feedback. Simulating the haptic interaction between two 3D objects or tools is desirable for many applications, but it is computationally more expensive than point-based and ray-based interactions (see Fig. 10C). Although a single point is not sufficient for simulating the force and torque interactions between two 3D virtual objects, a group of points distributed over the surface of the probing object has been shown to be a feasible solution. For example, McNeely simulated the touch interactions between a 3D model of a teapot and a mechanical assembly.

3.3 Related Work

3.3.1 Collision Detection

Collision detection approaches have been well studied in computer graphics and computer animation, and in haptics the basic ideas are the same. Most previous work can be categorized by the type of triangle models used to represent objects: convex polytopes and general polygonal models.

Various techniques have been developed for convex polytopes, based on linear programming, incremental computation, feature tracking and multi-resolution methods. For general polygonal models, bounding volume hierarchies (BVHs) have been widely used for collision detection and separation distance queries. Hierarchies differ in the underlying bounding volume or traversal scheme; they include AABB trees, OBB trees, sphere trees, swept sphere volumes, and convex hull-based trees.

Besides triangle models, point set models are becoming a popular way to represent large models in computer graphics, thanks to the wide availability of cheap 3D scanning devices. Moreover, points are considered a more efficient way of representing complex geometry than triangles. Therefore, some haptic rendering algorithms using point set surfaces have been proposed. This kind of collision detection algorithm is used in haptic-visual virtual environments containing highly detailed, rigid and dynamic models that are manipulated with haptic devices.

3.3.2 Penetration Depth Computation

A few efficient algorithms have been proposed to compute the penetration depth (PD) between convex polytopes. Dobkin et al. computed the directional PD using the Dobkin and Kirkpatrick polyhedral hierarchy. Agarwal et al. used a randomized approach to compute the PD.

Given the complexity of PD computation, a number of approximation approaches have been proposed to estimate it efficiently. One such algorithm computes upper and lower bounds on the PD of convex polytopes; it can be further improved by expanding a polyhedral approximation of the Minkowski sum of the two polytopes. There are also other approximation approaches based on discretized distance fields.

3.3.3 Contact Force Computation

Depending on the application, different approaches have been proposed for contact force computation. Constraint-based dynamics can provide more accurate response forces, but it is not suitable for complex haptic rendering. Penalty-based methods are commonly used for fast computation of contact forces, which depend on the inter-penetration between the objects. The main problem with penalty methods is that they can cause instabilities or unwanted vibrations when facing high-stiffness objects or low update rates.

In 3 degree-of-freedom (DoF) haptic rendering, the force calculation is mainly based on the interaction between the haptic interface point and the virtual objects. Constraint-based techniques such as the god-object and virtual proxy approaches are commonly used for surface contact as well as for rendering internal properties. In addition, ray-based rendering can also be used for 3-DoF haptic rendering.

In 6-DoF haptic rendering, both volumetric approaches and prediction methods for polygonal models have been proposed. One way to calculate the object-based haptic force is to use all the surface points of the probing object and check whether each is inside or outside the other objects. The contact points determine the force direction and magnitude, and the sum of the per-point vectors is the force vector applied to the haptic device.
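A sketch of this summation, assuming for illustration that the environment is given as a signed distance function (here a unit sphere); the point shell of the probing object is tested point by point, and per-point penalty forces are accumulated into a net force and torque:

    import numpy as np

    def net_force_torque(shell_points, center, k, sdf, sdf_normal):
        # Sum penalty forces over the probing object's point shell.
        F = np.zeros(3)
        T = np.zeros(3)
        for p in shell_points:
            d = sdf(p)
            if d < 0.0:                       # this shell point has penetrated
                f = -k * d * sdf_normal(p)    # push out along the surface normal
                F += f
                T += np.cross(p - center, f)  # torque about the probe's center
        return F, T

    # Example environment: a unit sphere centered at the origin
    sphere_sdf = lambda p: np.linalg.norm(p) - 1.0
    sphere_normal = lambda p: p / np.linalg.norm(p)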

In addition, a voxel-based approach has been proposed for 6-DOF haptic rendering. In a simulation of surgical incision training, a voxmap and a point shell are built for the surgical knife and the tissue, respectively. The cutting force, friction force and clamping force are calculated from the voxmap and point shell using different force calculation models.

4. Six-DoF Haptic Rendering Algorithm

In a 6-DoF haptic rendering algorithm, the computation at each time frame involves the following steps:

1. Collision detection:

Firstly, the algorithm detects if an intersection has occurred between an object held by the user and the virtual environment.

2. Contact manifold and penetration depth computation:

The contact manifold is the set of all points where the two objects come into contact. If an intersection has occurred, the algorithm computes the intersection points that form the intersection region and the contact normal direction. A measure or estimate of the penetration depth along the contact normal direction is then computed from the intersection region.

3. Restoring forces and torques computation:

A restoring or contact force is often calculated based on penalty methods that require the penetration depth. Given the force and the contact manifold, restoring torques can be easily computed.

4.1 Six-DoF Haptic Rendering Pipeline

For a more concrete understanding of these steps, consider an example six-DoF haptic rendering method, shown in Fig. 12.

In this haptic rendering pipeline, the algorithm computes the displayed force based on the following steps.

  1. Identify the convex pieces of the objects that are inter-penetrating or are closer than a distance tolerance.
  2. Near Contact: For each pair of convex pieces that are disjoint but within a given tolerance, declare a near contact and return the corresponding pair of closest features and their separation distance.
  3. Penetrating Contact: For each overlapping pair of convex pieces, compute its Penetration Depth along with the associated Penetration Depth features.
  4. Cluster all contacts based on their proximity and their PD or distance values.
  5. Compute penalty-based restoring forces at the clustered contacts.

As shown in Fig. 12, this 6-DOF haptic rendering algorithm first checks whether convex pieces of the two objects are overlapping, or disjoint but within a given tolerance threshold. At run time, the intersection test is applied recursively to nodes of one Bounding Volume Hierarchy (BVH) against nodes of the other. We use an efficient, incremental algorithm for convex polytopes based on Voronoi marching to perform a collision query on each pair of convex pieces; it computes the separation distance between the given pair. This top-down traversal is applied recursively to both hierarchies until there is no intersection between the leaf nodes, or no leaf node lies within the tolerance.

If two convex pieces are disjoint and inside the tolerance, we determine the closest features between them along with their associated distance measures (Near Contact). The features may correspond to a vertex, an edge or a face. If the pieces are overlapping, we identify the intersection regions and estimate the penetration depth (PD) and the associated PD features (Penetrating Contact). Each pair of closest features or PD features corresponds to a single contact. The forces applied at near contacts can be regarded as elastic pre-contact forces. They reduce the amount of inter-penetration between objects, therefore increasing the robustness of our PD estimation.

Once the algorithm identifies all the contacts, we cluster them based on the Euclidean distance δ between them; an octree is used to cluster the contacts efficiently. Then, for each clustered contact, the algorithm computes its position, distance value and force direction as a weighted average, where the weights are the distance values associated with each pair of contacts in the cluster. Finally, force and torque are computed independently for each representative (clustered) contact, and the net force and torque applied to the haptic probe are obtained by summing the forces and torques of all representative contacts.

This haptic pipeline basically follows the general steps of haptic rendering. One improvement is that the contact step is divided into two situations (near contact and penetrating contact) to improve the efficiency and speed of the calculation.

4.2 Collision Detection

Collision detection is well developed in computational geometry and computer graphics, and the general approaches were introduced in section 3.3.1. In this part, some hierarchical collision detection approaches are presented in more detail, since they are commonly used in haptic rendering.

Algorithms for collision detection between convex polyhedra are not effective when applied to nonconvex polyhedra. However, hierarchical culling or spatial partitioning techniques that significantly reduce the number of primitive-level tests can greatly increase the speed of collision detection between general models. Over the last decade, bounding volume hierarchies (BVHs) have proven successful in accelerating collision detection for dynamic scenes of rigid bodies. Gregory's work gives an extensive description and analysis of BVHs for collision detection.

4.2.1 Bounding Volume Hierarchies (BVH)

Assuming that an object is described by a set of triangles, a BVH is a tree of BVs, where each BV bounds a cluster of triangles, and the clusters bounded by the children of a BV form a partition of the cluster bounded by that BV. Often the leaf BVs bound a single triangle. A BVH may be created top-down, by successive partitioning of clusters, or bottom-up, using merging operations.

To perform collision detection using BVHs, two objects are queried by recursively traversing their BVHs in tandem, as shown in Fig. 13. Each recursive step tests whether a pair of BVs, A and B, overlap. If A and B do not overlap, the recursion branch is terminated. Otherwise, if they overlap, the algorithm is applied recursively to their children. If A and B are both leaf nodes, the triangles within them are tested directly. This process generalizes to other types of proximity queries as well.

Fig. 13. Bounding Volume Hierarchy traversal. From left to right, test of BVs in object space, schematic representation of the BVHs, and BVTT showing positive and negative tests. The collision test is performed by traversing the BVHs in tandem, and the pairwise BV tests can be represented using the BVTT.
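The tandem traversal just described can be sketched as follows; the BV node interface (overlaps, is_leaf, children, triangle) is assumed for illustration:

    def traverse(a, b, pairs):
        # Recursive tandem traversal of two BVHs, collecting triangle pairs
        # that must be tested exactly at the primitive level.
        if not a.overlaps(b):
            return                    # prune this branch of the BVTT
        if a.is_leaf and b.is_leaf:
            pairs.append((a.triangle, b.triangle))
        elif a.is_leaf:
            for c in b.children:
                traverse(a, c, pairs)
        else:
            for c in a.children:
                traverse(c, b, pairs)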

The test between two BVHs can be described by the bounding volume test tree (BVTT), a tree structure in which each node holds the result of the query between two BVs. When there is temporal coherence, collision tests can be accelerated by generalized front tracking (GFT): GFT caches the front of the BVTT where the result of the queries switches from true to false, and uses it to initialize the collision query in the next time step.

The overall cost of a collision test is proportional to the number of nodes in the front of the BVTT. When large areas of the two objects are in close proximity, a larger portion of the BVTT front is close to the leaves, and it consists of a larger number of nodes. The size of the front also depends on the resolutions with which the objects are modeled; higher resolutions imply a deeper BVTT. Therefore, the cost of a collision query depends on two key factors: the size of the contact area and the resolutions of the models.

4.2.2 Oriented Bounding Box (OBB) Trees

OBB trees are considered one of the most effective approaches for collision detection between rigid bodies, since they achieve a balance between bounding tightness and the cost of the overlap test. The method has also been applied successfully in several 6-DoF haptic rendering examples [34].

The main advantage of OBBs is that they fit objects more tightly than simple bounding volumes such as the axis-aligned bounding box (AABB) or bounding sphere. Fig. 14 shows the construction of an OBB: the vertices of the object are represented by a set of points, and the idea is to find a main axis for the OBB that minimizes its volume.

In practice, this can be approximated by projecting the points onto a line and maximizing the variance of the distribution of the projected points [21]. The orientation of the OBB is given by the eigenvectors of the covariance matrix of the original set of points.
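A minimal numpy sketch of this construction, using the raw vertices (robust implementations sample the convex hull instead, as noted below):

    import numpy as np

    def obb_from_points(points):
        # Fit an OBB: its axes are the eigenvectors of the points' covariance.
        pts = np.asarray(points, dtype=float)
        mean = pts.mean(axis=0)
        cov = np.cov((pts - mean).T)          # 3x3 covariance matrix
        _, axes = np.linalg.eigh(cov)         # columns form an orthonormal basis
        local = (pts - mean) @ axes           # coordinates in the OBB frame
        lo, hi = local.min(axis=0), local.max(axis=0)
        center = mean + axes @ ((lo + hi) / 2.0)
        half_extents = (hi - lo) / 2.0
        return center, axes, half_extents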

For polyhedral models with irregular sampling, as shown in Fig. 15, it is convenient to first compute the convex hull of the model and regularly sample the faces of the convex hull, thus enhancing the robustness of the OBB computation.

Although OBBs provide better fitting properties, the overlap test between two OBBs is more expensive than for simpler BVs such as AABBs or spheres.

The overlap test between two OBBs relies on the search for a separating axis. As illustrated in Fig. 16, an axis L is a separating axis for two OBBs A and B if and only if the projections of A and B onto L yield two disjoint intervals. Given the distance s between the projections of the centroids of A and B onto L, and the half-sizes r_A and r_B of the projected intervals, the intervals overlap if and only if s <= r_A + r_B.
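This interval test is the core of the separating axis method (the full OBB test tries the three face axes of each box plus the nine pairwise edge cross products). A sketch of the per-axis check, with each OBB given as (center, axes, half_extents):

    import numpy as np

    def overlap_on_axis(axis, obb_a, obb_b):
        # Project both OBBs onto the candidate axis L and compare intervals.
        c_a, ax_a, h_a = obb_a
        c_b, ax_b, h_b = obb_b
        s = abs(np.dot(c_b - c_a, axis))              # distance between projected centroids
        r_a = np.sum(h_a * np.abs(ax_a.T @ axis))     # half-size of A's interval
        r_b = np.sum(h_b * np.abs(ax_b.T @ axis))     # half-size of B's interval
        return s <= r_a + r_b                         # False => L is a separating axis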

4.3 Contact Manifold and Penetration Depth Computation

The computation of the contact manifold and penetration depth can be considered an extension of collision detection. Traditional computer graphics mainly focuses on detecting a collision, not on how deep the collision is. For haptic rendering, however, the penetration depth is directly related to the calculation of the restoring force and torque. For 3-DoF haptic rendering, the penetration depth can easily be computed with penalty-based methods: the haptic interface is represented by a single point, so the calculation can run at a high update rate of 1 kHz. In 6-DoF haptic rendering, the haptic interface becomes a complex object or tool; a direct extension of the 3-DoF method would require computing the distances between all vertices of the two objects, which is unacceptable for haptic rendering of large models. We therefore need new methods for estimating the penetration depth in 6-DoF haptic rendering. In this section, some important definitions are introduced first, followed by the DEEP algorithm by Kim et al. [35].

The penetration depth is computed by extending the closest-feature algorithm. It is defined as the smallest distance that one of the primitives has to move so that the two primitives are just touching and not penetrating along the contact normal direction.

4.3.1 Some Definitions

Minkowski Sum:

The Minkowski sum of two sets A and B is defined as the set of pairwise sums of vectors from A and B: A ⊕ B = { a + b : a ∈ A, b ∈ B }. Fix a point O in the plane and call it the origin. In other words, to find the Minkowski sum of two sets, one considers all possible sums of a point from one set and a point from the other. If the origin is translated from O to O′, the sum of the two sets is translated by the same distance in the opposite direction.

Similarly, the Configuration Space Obstacle (CSO) is defined as A ⊕ (−B) = { a − b : a ∈ A, b ∈ B }, also called the Minkowski difference of the two objects. The Minkowski difference is a special case of the Minkowski sum of two convex shapes: one shape "grown" by the reflection of the other. If you imagine two circles of equal radii positioned at A and B, the Minkowski difference is one circle of twice the radius, positioned at A − B.

It's formed by taking every point on the surface of one object, finding the most opposite point of the other and subtracting them to form a third point which lies on the surface of the Minkowski difference. It's easier to imagine this in the case of circles; for any direction D, there is a point most opposite it in the direction - D. If you do this for two circles of equal radius, you end up with a third circle of twice the radius, which is the Minkowski difference of the two shapes.

The key thing to note is that when A and B collide, the CSO contains the origin of what is referred to as configuration space: the space in which the Minkowski difference resides, i.e., the set of vector differences of points of the two objects.

As an example, shown in Fig. 17, the top left shows the separation distance δ between objects A and B in object space. The top right shows the same separation distance in configuration space, as the distance from the origin to the boundary of the Minkowski difference. The bottom left shows the penetration depth δ between A and B in object space, and the bottom right shows the penetration depth in configuration space, again as the distance from the origin to the boundary of the Minkowski difference. Note that in the penetrating case, the origin of the configuration space lies inside the Minkowski difference.
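For small convex point clouds this correspondence can be checked directly by brute force. The sketch below builds the Minkowski difference explicitly and uses scipy's Delaunay triangulation to test whether the origin lies inside its convex hull; this is for illustration only and is far too slow for haptic update rates:

    import numpy as np
    from itertools import product
    from scipy.spatial import Delaunay

    def minkowski_difference(points_a, points_b):
        # All pairwise differences a - b; for convex A and B, the convex hull
        # of these points is the CSO of A and -B.
        return np.array([a - b for a, b in product(points_a, points_b)])

    def objects_intersect(points_a, points_b):
        # A and B intersect iff the origin lies inside the CSO.
        cso = minkowski_difference(points_a, points_b)
        return Delaunay(cso).find_simplex(np.zeros(3)) >= 0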

Gauss Map:

The Gauss map is a mapping from object space to the surface of a unit sphere in 3D: a face is mapped to a point, an edge to an arc on the sphere, and a vertex to a convex region. The mapping thus takes features from object space to normal space, see Fig. 18.

In Fig. 18, assume that e1, ..., en are the incident edges of a vertex v, and f1, ..., fn are the faces that share these edges; each face fi is associated with its outward normal ni. In (b), the Gauss map sends each face fi to a point on the unit sphere, each edge ei to a great arc, and the vertex v to a convex region. If the Gauss maps of two objects A and B are computed and overlaid, the Minkowski sum can be reconstructed from the overlay.

4.3.2 DEEP Algorithm for Penetration Depth Estimation

The Dual-Space Expansion for Convex Polyhedra (DEEP) algorithm computes the penetration depth between two convex polytopes. It exploits the definition of penetration depth as the minimum distance from the origin to the boundary of the Minkowski sum (or Configuration Space Obstacle, CSO) of the two objects A and −B.

DEEP exploits the fact that the Minkowski sum of two convex polyhedra can be computed from the arrangement of their Gauss maps. A Gauss map relates a feature by its surface normal in 3D to a location on the surface of a unit sphere, as shown by the example in Fig. 19. A face of a polyhedron maps to a point in the Gauss map, an edge maps to a great arc, and a vertex produces a region bounded by great arcs.

In practice, DEEP implicitly computes the surface of the Minkowski sum by constructing local Gauss maps of the objects. Fig. 19 provides an overview of the computation of the Minkowski sum from the Gauss maps.

The Gauss maps of the convex polytopes to be tested are computed as a preprocess. During runtime, by using a pair of initialization features, the Gauss map of one object is transformed to the local reference system of the other object. The Gauss maps are projected onto a plane, and the arrangement is implicitly computed. The features that realize the penetration depth also define a penetration direction, which corresponds to a certain direction in the intersected Gauss maps.

The problem of finding the penetration features reduces to checking distances between vertex-face or edge-edge features that overlap in the Gauss map, as these are the features defining the boundary of the Minkowski sum.

Given a pair of features that define a vertex of the Minkowski sum, DEEP proceeds by walking to neighboring feature pairs that decrease the penetration depth, until a local minimum is reached. The distance from the origin to the boundary of the Minkowski sum of two convex polytopes may have several local minima, but in practice DEEP reaches the global minimum if appropriate initialization features are given. In situations with high motion coherence, the penetration features from the previous frame are usually good initialization features.
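The walk itself is a simple greedy local search. The following sketch is schematic: the feature-pair graph (neighbors) and the per-pair depth function are assumed interfaces, not the actual DEEP data structures:

    def walk_to_local_minimum(start_pair, neighbors, depth):
        # Greedy walk over feature pairs: repeatedly move to the neighboring
        # pair with the smallest realized penetration depth, stopping when no
        # neighbor improves on the current pair.
        current, current_d = start_pair, depth(start_pair)
        while True:
            best, best_d = current, current_d
            for n in neighbors(current):
                d = depth(n)
                if d < best_d:
                    best, best_d = n, d
            if best is current:          # no improving neighbor: local minimum
                return current, current_d
            current, current_d = best, best_d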

Some evaluation tests have been performed on DEEP, and the results show that the query time in situations with high motion coherence is almost constant, and that performance is better than previous algorithms, both in query time and in the variation of the penetration normals.

4.4 Contact Forces and Torques

Given the contact manifold and estimated penetration depth between the probe and the virtual environment, we can compute the contact forces and torques for 6-DOF haptic rendering. In this section, we describe the basic formulation of 6-DOF force display and the use of predictive techniques to avoid penetration as much as possible.

The computation of contact forces is based on penalty methods. Using Hooke's law, a spring force proportional to the penetration depth is generated:

F = k d,

where k is the spring stiffness constant and d is the depth of penetration. The computed restoring force vector is applied to the contact manifold along the contact normal direction, generating a sense of touch.

The restoring torque is generated by

τ = r × F,

where F is the contact force vector applied at the contact point p and r is the radius vector from the center of mass to p.

4.4.1 Stable Force and Torque Computation

This force model is chosen to be computationally feasible in real time while still reproducing physically realistic forces. Essentially, it follows Hooke's law, where forces are proportional to displacements. Adding a damping term to the force calculation has proven a feasible way to improve the stability of the haptic device. Even though a force computation based on the formulation of the linear complementarity problem might reproduce higher-fidelity, more realistic contact forces, its solution turns out to be prohibitively slow for haptic rendering. In addition, Hooke's law can also approximate inter-object contact that results in very small deformation. The forces on Object 1 and Object 2 are computed as

F = (k d + kv v) n,

where k and kv are the stiffness and damping constants, respectively, d is the penetration depth, v is the relative normal velocity and n is the unit contact normal.

The relative velocity v is computed by subtracting the velocities of the two objects, v1 and v2, at the contact point p and projecting the result onto the contact direction:

v = (v1 − v2) · n.

Moreover, the torque is computed as the cross product of the vector from the center of mass c of each object to the contact point p with the force F:

τ = (p − c) × F.

The forces and torques are summed for each object to compute the net force and torque. If the force and torque to be applied to the probe and to the user exceed the maximum values of the haptic device, they are clamped to the maximum values, preserving their direction.
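Putting the damped force model, the torque computation, and the device clamping together gives the following sketch, where the stiffness k, damping kv and the device's force limit are assumed parameters:

    import numpy as np

    def contact_force(d, n, v_rel, k, kv):
        # Damped penalty force along the unit contact normal n.
        # d: penetration depth (>= 0); v_rel: relative normal velocity (v1 - v2) . n
        return (k * d + kv * v_rel) * n

    def contact_torque(p, center, f):
        # Torque about the object's center of mass from force f at contact point p.
        return np.cross(p - center, f)

    def clamp_to_device(f, f_max):
        # Clamp the force magnitude to the device limit, preserving direction.
        mag = np.linalg.norm(f)
        return f if mag <= f_max else f * (f_max / mag)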

4.4.2 Predictive Collision Response

Since the computation of penetration depth is more expensive than the computation of the separation distance between objects, we can minimize the frequency of penetration depth computation by conceptually "growing" the actual surface along its surface normal direction by a small tolerance d_tol. When the separation distance d < d_tol, we can already declare a collision and apply the contact force

F = k (d_tol − d).

If an actual penetration of depth d_pen occurs, then we modify the contact force using the same principle by setting

F = k (d_tol + d_pen).

This formulation reduces the need for computing penetration depth, which is relatively more expensive than computing the separation distance. The tolerance can be chosen on the order of v ∆t, where v is the current velocity and ∆t is the haptic force update interval.
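A sketch of the resulting force rule, with the grown-surface tolerance d_tol as an assumed parameter:

    def predictive_contact_force(distance, penetration, k, d_tol):
        # Contact force against a surface conceptually "grown" by d_tol.
        # distance:    separation distance when the objects are disjoint, else None
        # penetration: penetration depth when the objects overlap, else None
        if penetration is not None:
            return k * (d_tol + penetration)   # actual penetration has occurred
        if distance is not None and distance < d_tol:
            return k * (d_tol - distance)      # near contact: pre-contact force
        return 0.0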

4.4.3 Force and Torque Interpolation

The magnitude of the contact force can vary sharply between successive frames, creating force discontinuities. Interpolation between the two force values is therefore added to achieve smooth force effects.

Let F0 be the force displayed at the previous frame and F1 the force generated during the current frame, and assume that F1 > F0. Let Fmax be the maximum force difference allowed between successive updates. The following rule is used to determine the restoring force F1 to display:

if (F1 − F0) > 2·Fmax then
    F1 = F0 + Fmax
else if (F1 − F0) > Fmax then
    F1 = (F0 + F1) / 2
display F1
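The same rule can be applied to force vectors by limiting the magnitude of the change between updates; in this sketch the scalar step F0 + Fmax becomes a step of length Fmax along the force difference direction:

    import numpy as np

    def interpolate_force(f0, f1, f_max):
        # Limit the change between successive force updates (see rules above).
        delta = np.linalg.norm(f1 - f0)
        if delta > 2.0 * f_max:
            return f0 + (f1 - f0) * (f_max / delta)   # step toward f1 by f_max
        if delta > f_max:
            return (f0 + f1) / 2.0                    # average the two forces
        return f1                                     # small change: display directly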

4.5 Experiment and Results

The algorithms described above have been implemented and integrated with force feedback hardware to demonstrate an object-based haptic rendering framework. The experiments were performed on a Windows 2000 PC with dual 1-GHz Pentium III CPUs and 500 MB of memory, using a 6-DOF PHANToM Premium 1.5 haptic device.

Fig. 20 illustrates a typical benchmark scenario for our haptic simulation framework. In Fig. 20(a)-(c), a non-convex object, a spoon, touches the surface of another non-convex object, a cup. In the particular configuration shown in Fig. 20(b), the contacts returned by the collision detection module are clustered into four groups: one contact (P) due to penetration and three other contacts within the tolerance threshold (D). The arrows in the figure denote the directions of the resulting restoring forces, and their sizes denote the magnitudes of the forces.

Compared with previous approaches to 6-DOF (object-based) haptic rendering, this algorithm offers improvements in several respects. It handles penetration computations more reliably and accurately than earlier approaches: in previous algorithms, the penetration depth was roughly estimated along the direction of motion using the previous closest feature pairs, whereas this algorithm computes a locally optimal penetration value, which often turns out to be the exact penetration depth. This ensures more stable and realistic force computation. In addition, the method avoids artifacts such as force discontinuities arising from discretization problems.

5. Applications and Future Directions

Haptic research and development has focused on designing and manipulating prototypes of different characteristics or objects used in virtual environments. Applications of this technology have been spreading rapidly from devices applied to graphical user interfaces, games, multimedia publishing, scientific discovery and visualization, arts and creation, editing sound and images, the vehicle industry, engineering, manufacturing, teleoperations, education and training, the military domain, as well as medical simulation and rehabilitation.

5.1 Applications of haptics

The advantage of a haptic device is that it provides more freedom for 3D operation than the traditional keyboard, mouse and other computer input devices. In addition, a haptic device is a bidirectional peripheral that can both input position data to the computer and output force feedback to the user. Many applications therefore become possible with the development of haptic technology; some are shown in Table 1.

Table 1. Application Areas and Examples

Application areas                     | Examples
Medicine                              | Training, surgical simulation
Risky and specialized areas           | Astronauts, mechanics
Education about complex objects       |
Creative 3D work                      | Modeling, product design
Interaction in 3D and VR environments | 3D games

  1. Medicine: surgical simulators for medical training; manipulating micro and macro robots for minimally invasive surgery; remote diagnosis for telemedicine; aids for the disabled such as haptic interfaces for the blind.
  2. Entertainment: video games and simulators that enable the user to feel and manipulate virtual solids, fluids, tools, and avatars.
  3. Education: giving students the feel of phenomena at nano, macro, or astronomical scales; “what if” scenarios for non-terrestrial physics; experiencing complex data sets.
  4. Industry: integration of haptics into CAD systems such that a designer can freely manipulate the mechanical components of an assembly in an immersive environment.
  5. Graphic Arts: virtual art exhibits, concert rooms, and museums in which the user can log in remotely to play the musical instruments and to touch and feel the haptic attributes of the displays; individual or cooperative virtual sculpting across the internet.

5.1.1 Haptics in Data Visualization

Data visualization uses animations or interactive graphics to analyze or solve a problem. Haptic applications for data visualization fall into two categories: applications for scientific data visualization, and applications for visually impaired users. Incorporating haptics into scientific data visualization allows users to form a high-level view of their data more quickly and accurately. As an example, a problem-solving environment for scientific computing called SCIRun has been developed, whose haptic/graphic display is used to present flow and vector fields such as fluid flow models for airplane wings.

Another application is the incorporation of haptics into biomolecular simulation. For instance, a system called Interactive Molecular Dynamics allows the manipulation of molecules in a molecular dynamics simulation with real-time force feedback and a graphical display. Finally, at the University of North Carolina, haptic devices have been used for haptic rendering of high-dimensional scientific datasets, including three-dimensional (3-D) force fields and tetrahedralized human head volume datasets.

Haptic technology makes it possible to enhance molecular visualization and molecular docking systems with force feedback in such a way that the user can "feel" the force field of a molecule and its interactions. There are haptic-based systems that enable users to feel the electrostatic force between a probe molecule and the explored biomolecule. Lai-Yuen and Lee [38] developed a computer-aided design system for molecular docking and nanoscale assembly, using their own lab-built 5-DOF haptic device; their paper discusses the docking of a ligand to a protein, where the force feedback is calculated from van der Waals forces. Stocks and Hayward developed the haptic system HaptiMol ISAS [39]. In another approach to rigid-body molecular docking, proposed by Subasi and Basdogan [40], the user inserts a rigid ligand molecule into the cavities of a protein molecule to search for the binding cavity. Similarly to the cube approach, an Active Haptic Workspace (AHW) was implemented for efficient haptic exploration of large protein-protein docking at high resolution.

In this part, the biomolecular docking system HMolDock (Haptic-based Molecular Docking), shown in Fig. 21, is introduced to show how a haptic device is used in molecular docking applications.

The molecular structure file format of the Protein Data Bank is chosen as the input source. Two molecules, or one molecule and a probe, are visualized on the screen. The user can assign the haptic mouse to the probe or to one of the molecules and move it towards and around the other molecule. An interaction force is calculated at each position, and the resulting attraction/repulsion force is felt by the user through the haptic device. The force direction and magnitude are visualized as a vector. Thus, a probe or molecule can be selected with the haptic mouse and moved around to let the user "feel" the force changing.
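As an illustration of the kind of force law such systems use, the pairwise Lennard-Jones potential that is commonly used to model van der Waals interactions yields the following force on the probe atom. This is a generic textbook form with assumed parameters epsilon and sigma, not necessarily HMolDock's exact model:

    import numpy as np

    def lennard_jones_force(p_probe, p_atom, epsilon, sigma):
        # Force on the probe atom from one target atom, F = -dU/dr along r,
        # for U(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6).
        r_vec = p_probe - p_atom
        r = np.linalg.norm(r_vec)
        s6 = (sigma / r) ** 6
        mag = 24.0 * epsilon * (2.0 * s6 * s6 - s6) / r   # positive = repulsive
        return mag * (r_vec / r)

    # The total docking force is the sum over all atoms of the explored biomolecule.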

5.1.2 Haptics in Medical Simulation

The medical field has been an abundant source of haptic development. Introducing haptic exploration as a medium of training has revolutionized many surgical procedures over the last decade. Surgeons used to rely on the feeling of net forces resulting from tool-tissue interactions and needed surgical experience to operate successfully on patients. Haptic applications include surgical simulation, telesurgery systems, rehabilitation, and medical training. Haptic-based surgical simulators address many of the issues in surgical training. First, they can generate scenarios of graduated complexity. Second, new and complex procedures can be practiced on a simulator before proceeding to a human or animal. Finally, students can practice on their own schedule and repeat practice sessions as many times as they want. Surgical simulators have been surveyed and can be classified, by simulation complexity, as needle-based, minimally invasive, and open surgery simulators.

Haptic applications in rehabilitation involve applying controlled forces to an injured or disabled organ (such as a finger, arm or ankle) so that it regains its strength and range of motion. Haptic interfaces show clear benefits in imitating a therapist's exercises, with the possibility of both position and force control. A lot of research has been performed on haptic applications for rehabilitation and medical training.

5.1.3 Haptics in E-Commerce

As for electronic commerce, or e-commerce, force feedback would allow the consumer to physically interact with a product. Human hands can test a product by feeling the warm/cold, soft/hard, smooth/rough, and light/heavy properties of the surfaces and textures that compose it. Consumers usually like to touch certain products (such as bed linens and clothes) to try them before they buy.

5.1.4 Haptics in Education

There is growing interest in the development of haptic interfaces that allow people to access and learn information in virtual-reality environments. A virtual-reality application combined with haptic feedback for geometry education has recently been investigated. The proposed system provides a haptic 3-D representation of a geometry problem, its construction, and its solution. Performance evaluation showed that the system is user friendly and provides a more efficient learning approach.

A system for constructing a haptic model of a mathematical function using the PHANToM haptic device has also been introduced and implemented. The program accepts a mathematical function of one or two variables as input and constructs a haptic model that feels like balsa wood with the trace of the function carved into its surface.

Another application, which simulates a catapult, enables users to interact with the laws of physics through a force feedback slider (FFS) interface. The FFS is a motorized potentiometer limited to one degree of movement (push/pull along a line); the user grabs the slider and moves the handle. It has been shown that force feedback helps users create a mental model for understanding the laws of physics.

5.1.5 Haptics in Entertainment

Haptic research in the field of home entertainment and computer games has blossomed during the past few years. In general, the game experience has four pillar aspects: physical, mental, social, and emotional. In particular, force feedback technology enhances the physical aspects of the game experience by creating a deeper physical feeling of playing a game, improving the physical skills of the players, and imitating the use of physical artifacts.

Many researchers have introduced complex haptic-based games. For instance, Haptic Battle Pong is an extension of Pong with haptic controls using the PHANToM device. The haptic device is used to position and orient the paddle, while force feedback renders the contact between the ball and the paddle.

The Haptic Airkanoid is another ball-and-paddle game where a player hits a ball against a brick wall and feels the rebound of the impact. It has been shown that playing the haptic version is more fun even though the vibration feedback is not realistic.

Many current games take advantage of the haptic effects offered by mainstream haptic devices. For example, in a car racing game, players may feel vibrations in their joysticks or steering wheels as they drive over a rough section of road, or players of an action game may feel a rumble from their mouse as rockets shoot past their heads. While these devices can increase the level of immersion experienced by the user, we feel their use in games is often trivial or poorly planned. Granted, these devices cannot offer the level of interaction provided by modern haptic devices, but this is something we believe will soon change.

Take the example of HaptiCast, a 3D haptic game that acts as an experimental framework for assessing haptic effects. In HaptiCast, players assume the role of a wizard with an arsenal of haptic-enabled wands which they can use to interact with the game world, see Fig. 22. The integration of haptic feedback in this first-person-shooter style video game uses a "vanilla" 3D game engine.

In this game, the player interacts with the game world using a series of wands. When the player uses a wand, a spell is cast which displays a haptic effect and offers a different way of interacting with the game environment. The force values at the haptic device are calculated and displayed each time the physics engine is updated.

5.1.6 Haptics in Arts and Designs

Haptic communication opens new opportunities for virtual sculpting and modeling, painting, and museums. Sculpting and modeling are innately tactile arts; the introduction of touch in virtual sculpting is therefore especially important to the language inherent in sculptural forms. As for painting, haptics has a clear merit in recreating the "sight, touch, action, and feel" of the artistic process. DAB is a novel painting system with an intuitive haptic interface, which serves as an expressive vehicle for interactively creating painterly works. The force feedback enhances the sense of realism and provides tactile cues that enable the user to better manipulate the paint brush; the haptic stylus serves as a physical metaphor for the virtual brush. The bristles of the brush are modeled with a spring-mass particle-system skeleton and a subdivision surface, and the brush deforms as expected upon colliding with the canvas. The resulting system provides the user with an artistic setting conceptually equivalent to a real-world painting environment. Several users have tested DAB and were able to create original art work within minutes, see Fig. 23.

Furthermore, the haptic modality is a significant asset for virtual art exhibitions, as it allows an appreciation of 3-D art pieces without jeopardizing conservation standards.

Adding audio makes haptic-based systems closer to real simulation: users can interact through more sensory channels and be immersed in simulations that are more realistic. Modeling the sound produced when objects collide is the objective of a haptic interface intended to provide more realism in feeling a fabric's roughness, friction, and softness.

5.2 Future Directions

In the last decade there have been great developments in computer graphics. Since many problems of interactive graphical rendering have been investigated, virtual environments can now present synthetic images of objects with complex shapes, rich lighting effects and highly textured surfaces. With the development of virtual reality, more and more researchers are becoming aware of the important role of haptics.

In the 1990s, most haptic rendering was based on the point representation of the haptic interface (3-DoF haptic rendering). Recent research focuses on object-based haptic rendering (6-DoF haptic rendering) for the interactive manipulation of fairly complex scenes.

There remain several challenges for 6-DoF haptic rendering, arising from the design of force feedback devices, haptic rendering of deformable models, the stability and time delay of collaborative haptics, and new applications for 6-DoF haptic rendering.

Most previous force feedback devices have been rather heavy and inconvenient to wear, and their price was so high that they were used only for medical and research purposes. Some new haptic devices have now been commercialized for the game and entertainment market. However, it will still take time for haptic devices to become standard computer peripherals.

For a complete 6-DoF haptic rendering algorithm, the deformation of objects must be considered. However, most present algorithms are designed for rigid objects, since rigidity simplifies collision detection and dynamic simulation. Barbič and James have demonstrated the feasibility of 6-DoF haptic rendering of reduced deformable models with the Voxmap-PointShell (VPS) method. For rendering more realistic deformable models, new haptic rendering algorithms and models are needed.

Collaborative haptic communication is also a new direction for haptics, as it allows two or more users to use several haptic devices to manipulate objects in the same space. Some experiments have demonstrated the effectiveness of collaborative haptics. However, because of the high update rate of haptic devices, the requirements on the network and data transmission are extremely high. To address these challenges, new simple and efficient algorithms are needed for collaborative haptic rendering.
