Abstract
An automated approach is proposed for a microassembly task, which is to insert a 10 µm diameter glass tube into a 12 µm diameter hole in a silicon substrate and bond them together with UV curable adhesive. Two three-degree-of-freedom micromanipulators are used to move the glass tube and the dispensing needle, respectively. Visual feedback is provided by an optical microscope. The angle of the microscope axis is precisely calibrated using an autofocus strategy. A robust image segmentation method and feature extraction algorithms are developed to obtain the features of the hole, the glass tube and the dispensing needle. Visual servo control is employed to achieve accurate alignment of the tube with the hole. After the insertion, automated adhesive dispensing is used to bond the glass tube and the silicon substrate together. Online monitoring ensures that the diameter of the adhesive spot stays within a desired range. Experimental results demonstrate the effectiveness of the proposed strategy.
1 Introduction
Conventional micro-electro-mechanical systems (MEMS) fabrication technologies, such as bulk micromachining, surface micromachining, LIGA (from the German Lithographie, Galvanoformung, Abformung) and deep reactive ion etching, are widely used in producing various kinds of MEMS with simple functions. However, as MEMS become increasingly functional, these technologies are insufficient for producing MEMS with multiple functionalities. Integration of MEMS devices fabricated with different technologies and different materials is required to build a multi-functional hybrid MEMS. Microassembly is the process of manipulating, moving and bonding micro components together to form functional hybrid MEMS, and it is gaining much attention because of its essential role in constructing hybrid MEMS[1–4]. However, manual microassembly is laborious, time consuming and has a low success rate, so the automation of microassembly is indispensable for the eventual success of hybrid MEMS.
To realize the integration of micro components into a complete hybrid MEMS, reliable micro bonding technology is of great importance. There are various micro bonding technologies, such as micro fasteners[5], micro adhesive dispensing[6–8], soldering[9] and acoustic bonding[10]. Micro fasteners have been proven to be an effective way to bond different micro devices together, yet they have the disadvantages of complicating the structure of the micro parts and compromising their strength. Micro soldering and diffusion bonding are mainly used in micro circuit assembly, and they are not suitable for hybrid MEMS assembly involving soft materials. Micro adhesive dispensing has high potential: with the right type of adhesive, it is capable of firmly bonding almost any two kinds of materials.
Setting aside issues of adhesive properties, there are two main challenges facing micro adhesive dispensing. First, accurate alignment of the parts to be bonded is critical. Unlike the micro fastener approach, in which the parts are self-aligned by their mechanical structures after bonding, misalignment of the micro parts may aggravate the internal stress in the bonded structure after the adhesive cures[11], which may eventually affect the stability of the whole hybrid MEMS. Second, accurate positioning of the dispensed adhesive spot and precise control of the dispensed amount are of great importance for the bonding task. Too little adhesive cannot ensure the strength of the bond, while too much adhesive may affect the uniformity of the structure or even damage the surrounding environment.
In this paper, we present an automated approach to insert a 10 µm diameter glass tube into a 12 µm diameter hole in a silicon substrate and bond them together with UV curable adhesive. The insertion of the glass tube into the hole is split into two steps: 1) the glass tube tip is aligned with the hole using a visual servo control strategy; 2) the glass tube is inserted into the hole using an open-loop control method, because the image of the glass tube blends with that of the hole during insertion. Vision-based online monitoring makes sure that the right amount of adhesive is dispensed at the juncture of the two micro parts.
The rest of the paper is organized as follows: Section 2 gives a brief introduction of the system setup. In Section 3, the calibration of the microscope axis angle is introduced. Section 4 addresses the details of the automated microassembly process, including the insertion process of glass tube and the adhesive dispensing process. Section 5 presents the experimental results. Finally, the paper is concluded in Section 6.
2 System setup
The automated microassembly system is composed of an insertion unit, a micro dispensing unit, a vision unit and a host computer to control the whole process. The system setup is shown in Fig. 1.
The insertion unit consists of a six-degree-of-freedom (DOF) manual platform, on which the silicon substrate with a hole at its center is placed, and a 3-DOF manipulator, on which the glass tube is mounted through a vertically curved metal tube. The 6-DOF manual platform is pre-adjusted so that the silicon substrate placed on it faces upwards.
The dispensing unit is formed by a microinjector and a 3-DOF manipulator. The microinjector is used to add pressure to the dispensing needle to dispense the adhesive contained in the needle. The 3-DOF manipulator controls the motion of the dispensing needle, and it is the same type as the one used in the insertion unit.
The vision unit consists of a microscope camera and a moving platform with a support mechanism. The microscope camera is formed by a microscope lens and a CCD camera. The moving platform, which has one translational DOF, is employed to move the microscope camera in order to adjust its distance to the object being viewed. The microscope camera must be mounted pointing at the hole from an inclined direction so that the hole, the tube tip and the dispensing needle can be viewed at the same time. The insertion unit and the vision unit are mounted on a vibration isolation platform. The computer captures images via the microscope camera and controls the movements of the insertion unit and the moving platform.
3 Microscope axis calibration
To make enough space for the microassembly operation, the microscope is installed about 30° off the horizontal plane. The angle of the microscope axis is very important for the insertion process of the glass tube and the hole, because the final insertion process will be carried out in an open-loop manner. Therefore, it is necessary to calibrate the angle of the microscope axis, instead of taking a mechanically constrained theoretical value. A simple but effective way to calibrate the angle of the microscope axis is developed here. Fig. 2 illustrates the calibration method.
At position 1, the glass tube tip is on the focal plane of the microscope. It is then moved vertically downward by a known distance v_d to position 2, where the tip is out of the focal plane. The tip is refocused by moving it horizontally to position 3, and the horizontal distance between position 1 and position 3 is recorded as h_d. The angle of the microscope axis off the horizontal plane can then be calculated as

θ = arctan(h_d / v_d).     (1)
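As a minimal sketch of this calibration, assuming the tangent relation between the vertical defocusing distance and the horizontal refocusing distance, and averaging over repeated trials as in the experiments (function names are illustrative, not from the paper):

```python
import math

def microscope_axis_angle(v_d, h_d):
    """Angle of the microscope axis off the horizontal plane, in degrees.

    v_d: known vertical distance from position 1 to position 2
    h_d: horizontal refocusing distance to position 3 (same units as v_d)
    """
    return math.degrees(math.atan2(h_d, v_d))

def averaged_angle(trials):
    """Average the calibrated angle over repeated (v_d, h_d) measurements."""
    return sum(microscope_axis_angle(v, h) for v, h in trials) / len(trials)
```

With v_d = 100 µm and h_d ≈ 57.3 µm, the angle comes out close to the 29.8° reported in Section 5.2.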
4 Automated microassembly
The microassembly task is to insert a 10 µm diameter glass tube into a 12 µm diameter hole in a silicon substrate, and bond them together with adhesive. The flow chart of the general procedure is shown in Fig. 3.
Before the automated process begins, the glass tube, the hole and the dispensing needle must be manually brought into the field of view of the 120× microscope. This manual preparation is conducted with the help of a stereo microscope and a 12× microscope. Calibration of the 120× microscope is conducted in advance.
4.1 Image segmentation
Image segmentation is very critical in image processing[12,13]. The image segmentation method with three steps used here is illustrated in Fig. 4.
In the first step, a manually selected gray threshold is used to binarize the whole image. Because the hole image is much darker than the rest of the image, any gray threshold within a certain range guarantees a good binarization result. In addition, prior knowledge about the hole position is exploited: the hole sits in the bottom-right area of the image, so the right and bottom edges of the hole area can easily be determined by scanning from the bottom and the right. After the right and bottom edges are determined, the scanning area is narrowed, and the top and left edges are determined by scanning within the narrowed area.
In the second step, the glass tube is moved 2 µm to the right, and the needle is moved 2 µm downward. Then background subtraction is applied, followed by binarization with a threshold chosen in advance. To determine the right edge of the needle area and the right and left edges of the glass tube area, the area above the hole is projected vertically to form a curve, from which the three edges can be determined.
In the third step, after the needle and glass tube area are separated, the two areas are projected horizontally to form their curves. The top and bottom edges of the needle area, and the bottom edge of the glass tube can be decided from the projection curves.
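The projection steps above can be sketched as follows, assuming binary images stored as NumPy arrays (the exact scan directions and thresholds are implementation details of the authors' system):

```python
import numpy as np

def projection_curve(binary, axis):
    """Project a binary region: axis=0 gives the vertical projection
    (one count per column), axis=1 the horizontal projection (per row)."""
    return (binary > 0).sum(axis=axis)

def region_edges(binary, axis):
    """First and last index where the projection curve is non-zero,
    i.e. the outer edges of the foreground along that direction."""
    nz = np.flatnonzero(projection_curve(binary, axis))
    return (int(nz[0]), int(nz[-1])) if nz.size else None
```

Applied to the subtracted image, the vertical projection yields the left/right edges of the tube and needle areas, and the horizontal projections of the separated areas yield their top/bottom edges.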
4.2 Feature extraction
Since the 3-DOF manipulators have a high resolution, the key factor for the success of the automated microassembly lies in the feature extraction of the micro components in the image.
The hole is circular, but because of the tilt angle of the microscope, its image is approximately an ellipse. Therefore, after the hole area is determined, edge points of the hole are obtained by scanning the binarized hole area. An ellipse is then fitted to determine the center of the hole, which serves as the image feature of the hole.
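The paper does not specify its ellipse-fitting method. One common choice is a linear least-squares conic fit, from which the center follows in closed form; the sketch below is a hypothetical implementation, not the authors' code:

```python
import numpy as np

def ellipse_center(points):
    """Fit the conic a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1 to edge points
    by linear least squares, then return the ellipse center, which is the
    point where both partial derivatives of the conic vanish."""
    x, y = np.asarray(points, dtype=float).T
    A = np.column_stack([x * x, x * y, y * y, x, y])
    a, b, c, d, e = np.linalg.lstsq(A, np.ones_like(x), rcond=None)[0]
    det = 4 * a * c - b * b          # > 0 for a genuine ellipse
    return (b * e - 2 * c * d) / det, (b * d - 2 * a * e) / det
```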
The region of interest (ROI) of the tube is first acquired in the image segmentation stage. In the subsequent procedure, the tube ROI is tracked and updated. The image quality of the glass tube is poor because it is transparent and reflective, so the tube ROI is binarized to improve the robustness of the feature extraction. The binarization is based on the following observation: there are three gray levels in the glass tube area, namely the background, the dark part of the glass tube that blocks some of the light, and the bright part of the glass tube that reflects the light. Both the dark and the bright parts are images of the glass tube, namely the foreground, as given in

b(u, v) = 255 (foreground) if |g(u, v) − g_h| > g_t;  b(u, v) = 0 (background) otherwise     (2)

where g(u, v) is the gray value at pixel (u, v), g_t is a threshold to separate the foreground from the background, and its value is experimentally decided. The background gray value g_h is obtained from the grayscale histogram of the tube ROI.
The gray value with the highest count in the grayscale histogram is chosen as the gray value of the background. After binarization of the tube ROI, a median filter is applied to eliminate noise. Edge points are then obtained by scanning the binarized ROI, and the two edges of the glass tube tip on the image plane are fitted with lines using random sample consensus (RANSAC)[14,15]. The angle bisector of the two fitted edge lines is chosen as the center line of the glass tube. The intersection of the center line with the bottom of the tube area is determined, and it serves as the image feature of the tip center of the glass tube.
Feature extraction for the dispensing needle is similar to that for the glass tube, and is omitted here.
4.3 Aligning and insertion control
The insertion process is split into two steps: the visual servo control aligning step and the open-loop insertion step.
The glass tube tip is first aligned with the position 120 pixels above the hole center using visual servo control:

Δx_c = k_x Δu,  Δy_c = k_y Δv     (3)

where (Δu, Δv) are the image errors between the hole center and the glass tube tip center on the image plane, k_x and k_y are the pixel lengths, which are calibrated in advance, and Δx_c and Δy_c are the position errors on the clear imaging plane.
For convenience in transforming the motion increments on the clear imaging plane into the 3-D manipulator coordinates, the microscope camera is adjusted such that the x axis of the manipulator coordinates is parallel to the x_c axis of the camera coordinates. Δx_c and Δy_c are calculated directly from the feature errors on the image plane, as given in (3). The axis angle of the microscope and the parallel constraint between the x and x_c axes are used to convert Δx_c and Δy_c into the 3-D manipulator coordinates, as given in

Δx_m = Δx_c,  Δy_m = Δy_c sin θ,  Δz_m = Δy_c cos θ     (4)

where θ is the rotation angle around the x axis, and Δx_m, Δy_m and Δz_m are the position errors in the 3-D Cartesian space.
A PI controller is employed to move the glass tube tip to the aligning position:

[Δx_a, Δy_a, Δz_a]^T = K_p e(k) + K_i Σ_{j=0}^{k} e(j),  with e = [Δx_m, Δy_m, Δz_m]^T     (5)

where Δx_a, Δy_a and Δz_a are the adjusting position increments in the 3-D Cartesian space, and K_p and K_i are the parameters of the controller.
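One per-cycle update of the aligning loop might look like the sketch below. This is schematic, not the authors' code: the sin/cos decomposition of the camera-to-manipulator transform is one plausible reading of the axis-angle conversion, and the gains are the experimentally reported values from Section 5.4:

```python
import math

def image_to_manipulator(du, dv, kx, ky, theta_deg):
    """Map pixel errors (du, dv) to 3-D manipulator position errors.

    kx, ky: calibrated pixel lengths; theta_deg: calibrated microscope
    axis angle. Assumes the manipulator x axis is parallel to the camera
    x_c axis, so only the y_c component needs decomposing.
    """
    dxc, dyc = kx * du, ky * dv              # errors on the clear imaging plane
    t = math.radians(theta_deg)
    return dxc, dyc * math.sin(t), dyc * math.cos(t)

class PIController:
    """Per-axis PI law producing the adjusting position increments."""
    def __init__(self, kp=0.1, ki=0.5):
        self.kp, self.ki = kp, ki
        self.acc = [0.0, 0.0, 0.0]           # running sum (integral) of the error

    def step(self, err):
        self.acc = [a + e for a, e in zip(self.acc, err)]
        return [self.kp * e + self.ki * a for e, a in zip(err, self.acc)]
```

Each cycle, the extracted feature error is converted with `image_to_manipulator` and fed to `PIController.step`, whose output commands the 3-DOF manipulator.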
The first step is finished once the tube tip is moved to the aligning position. Then the second step begins. The tube tip is focused again to make sure that it is right on the clear imaging plane before the final insertion. Then it is relocated and moved downwards by a given distance to insert into the hole. The process of aligning the dispensing needle with the designated position is similar to the glass tube insertion process. The details are omitted here. The aligning process will be shown in the experiment part.
4.4 Adhesive dispensing control
Once the dispensing needle touches the silicon substrate or the glass tube, the adhesive starts to leak into the crevice of the hole and spread around the glass tube because of the wetting phenomenon. Before the needle makes contact, the adhesive clings firmly to the tip of the needle due to the strong surface tension when the diameter of the adhesive at the needle tip is very small[16,17].
To get a desired adhesive spot, an online “monitor” is designed as a “trigger”. When the diameter of the adhesive spot reaches 35 µm on the online “monitor”, it will immediately trigger a motion to withdraw the dispensing needle and end the adhesive dispensing process.
The online “monitor” is realized using background subtraction method. The image captured by the CCD right after the needle contacts the bottom part of the glass tube is used as the background image. After that, online images are used to subtract the background image, and binarization is performed to achieve the growing adhesive spot. To improve the speed of the image processing for the adhesive spot, the ROI is set to be a long rectangle, which contains 50 pixels above and 50 pixels below the hole center.
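The monitor's trigger logic can be sketched as below, with an assumed binarization threshold and pixel length (the real system measures the spot inside the rectangular ROI around the hole center):

```python
import numpy as np

def spot_diameter_px(frame, background, thresh=15):
    """Width in pixels of the adhesive spot, found by background
    subtraction and binarization of the monitoring ROI."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    cols = (diff > thresh).any(axis=0)       # columns containing spot pixels
    idx = np.flatnonzero(cols)
    return 0 if idx.size == 0 else int(idx[-1] - idx[0] + 1)

def should_withdraw(diameter_px, pixel_len_um, limit_um=35.0):
    """Trigger the needle withdrawal once the spot reaches the size limit."""
    return diameter_px * pixel_len_um >= limit_um
```

Each online frame is compared against the background image captured right after needle contact; as soon as `should_withdraw` returns true, the withdrawal motion ends the dispensing.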
5 Experiments and results
5.1 Experiment system
An experiment system was established according to the scheme given in Section 2, as shown in Fig. 5. The system contains three microscopes: a 120× magnification microscope camera (Navitar 12× combined with Mitutoyo M84010711 10×), a stereo microscope (Zeiss Stemi 2000-C) and a 12× magnification microscope camera (Navitar 12×). Only the 120× microscope camera is used to provide visual feedback in the automated microassembly task. The 12× microscope camera and the stereo microscope are used for the initialization of the automated process, which is to bring the glass tube and the dispensing needle into the vicinity of the hole in the silicon substrate. The two 3-DOF manipulators are both Sutter MP-285 devices, each with a travel of 2.54 cm and a resolution of 0.04 µm.
The glass tube tip is about 10 µm in diameter. The hole is 12 µm in diameter. And the outer diameter of the dispensing needle is about 1.2 µm.
5.2 Calibration results
Experiments were conducted to complete the calibrations of the system. Fig. 6 shows the calibration results of the microscope axis in 20 experiments. The calibrated angle of the microscope axis off the horizontal plane was 29.8°.
5.3 Image segmentation and feature extraction
The image segmentation method described in Section 4.1 was employed to segment the objects. Fig. 7 shows the image area acquired after median filtering and a morphological opening operation on the binarized subtraction image. The threshold used for binarization was 15.
The projection curves used for determining the edges of the glass tube area and the needle area are shown in Fig. 8. From Fig. 8 (a), with the vertical projection of the whole area above the hole area, the right edge of the needle area and left and right edges of the glass tube area were easily and robustly determined. Fig. 8 (b) is the horizontal projection of the needle area separated from Fig. 8 (a). From this curve, the top and bottom edges of the needle area were determined. Fig. 8 (c) is the horizontal projection curve of the glass tube area, from which the bottom edge of the glass tube area was determined.
The final image segmentation result is shown in Fig. 9. It can be found that all the three objects were well segmented.
5.4 Automated aligning and insertion
In the insertion experiment, the threshold g_t for binarization of the glass tube tip ROI was set to 15.
The tracking results and the feature extraction results of the tube tip with the proposed methods are shown in Figs. 10 and 11. It can be seen that the tracking area was adequate and the extracted feature was accurate.

The proportional coefficient K_p and the integral coefficient K_i in the PI control law were determined experimentally to be 0.1 and 0.5, respectively. In the visual servo control step, the moving trajectory of the glass tube tip on the image plane is shown in Fig. 12. After the glass tube tip was aligned with the designated aligning position 120 pixels above the hole area, it was refocused and relocated to eliminate the accumulated error introduced by the visual servo process. The trajectory of the aligning and insertion in the manipulator coordinates is shown in Fig. 13. Some pictures of the process are given in Fig. 14.

Fig. 14 (a) shows the initial status of the hole, the glass tube and the dispensing needle on the image plane. Fig. 14 (b) shows the results of autofocus and feature extraction of the glass tube tip. Fig. 14 (c) shows the final state of the visual servo control step, in which the glass tube tip center is aligned with the designated position above the hole center. Fig. 14 (d) shows the final result of the insertion. In the experiments, the tube tip was moved to the hole steadily and inserted into the hole accurately.
5.5 Automated adhesive dispensing process
The needle was aligned with the dispensing position using the same control strategy as in the glass tube aligning. Fig. 15 shows the dispensing needle aligning process and the adhesive dispensing process; the finished microassembly task is shown in the last image of Fig. 15. In the experiments, the adhesive dispensing was successfully conducted and the adhesive quantity was well controlled.
6 Conclusions
In this paper, an effective strategy is proposed to realize an automated microassembly task involving micro adhesive bonding, and several simple but effective methods are presented. The small depth of field is utilized to calibrate the microscope axis angle with an autofocus method. Robust image segmentation is achieved using background subtraction and projection curves. Feature extraction of the glass tube and the dispensing needle is realized with a RANSAC line fitting method. An online “monitor” is designed to ensure a precise size of the adhesive spot. Although these methods were put forward to solve the specific problems in our microassembly task, they can easily be adapted to various kinds of microassembly tasks. Experiments have verified the proposed strategy for realizing the automation of the microassembly task.
References
A. N. Das, P. Zhang, W. H. Lee, D. Popa, H. Stephanou. µ3: Multiscale, deterministic micro-nano assembly system for construction of on-wafer microrobots. In Proceedings of the 2007 IEEE International Conference on Robotics and Automation, IEEE, Roma, Italy, pp. 461–466, 2007.
N. Dechev, W. L. Cleghorn, J. K. Mills. Microassembly of 3D microstructures using a compliant, passive microgripper. Journal of Microelectromechanical Systems, vol. 13, no. 2, pp. 176–189, 2004.
T. Kasaya, H. Miyazaki, S. Saito, T. Sato. Micro object handling under SEM by vision-based automatic control. In Proceedings of the 1999 IEEE International Conference on Robotics and Automation, IEEE, Detroit, USA, pp. 2189–2196, 1999.
L. G. Chen, L. N. Sun, W. B. Rong, X. Q. Bian. Hybrid control of vision and force for MEMS assembly system. In Proceedings of the 2004 IEEE International Conference on Robotics and Biomimetics, IEEE, Shenyang, China, pp. 136–141, 2004.
L. D. Wang, L. Ren, J. K. Mills, W. L. Cleghorn. Automated 3-D micrograsping tasks performed by vision-based control. IEEE Transactions on Automation Science and Engineering, vol. 7, no. 3, pp. 417–426, 2010.
S. Böhm, K. Dilger, J. Hesselbach, J. Wrege, S. Rathmann, W. Ma, E. Stammen, G. Hemken. Micro bonding with non-viscous adhesives. Microsystem Technologies, vol. 12, pp. 676–679, 2010.
D. Andrijasevic, K. Malecki, I. Gioroudi, W. Smetana, W. Brenner. Low temperature non-viscous adhesive bonding in MEMS. In Proceedings of the 29th International Spring Seminar on Electronics Technology, IEEE, St. Marienthal, Germany, pp. 44–48, 2006.
X. J. Wang, X. D. Wang, G. J. Yao, C. Liu, L. D. Wang. Automatic micro-bonding technology of capillaries with adhesives. In Proceedings of the 6th International Conference on Electronic Packaging Technology, IEEE, Shenzhen, China, pp. 1–5, 2005.
K. Tsuchiya, A. Murakami, G. Fortmann, M. Nkao, Y. Hatamura. Micro assembly and micro bonding in nano manufacturing world. SPIE Proceedings, vol. 3834, pp. 132–140, 1999.
Y. B. Sun, Y. Luo, Y. Q. Feng, X. D. Wang. Adaptive pressure ultrasonic precise bonding method for polymer micro joint. In Proceedings of the 5th International Conference on Nano/Micro Engineered and Molecular Systems, IEEE, Xiamen, China, pp. 459–474, 2010.
F. Sarvar, D. A. Hutt, D. C. Whalley. Application of adhesives in MEMS and MOEMS assembly: A review. In Proceedings of the 2002 International Conference on Polymers and Adhesives in Microelectronics and Photonics, IEEE, Zalaegerszeg, Hungary, pp. 22–28, 2002.
Y. Gat. A branch-and-bound technique for nano-structure image segmentation. In Proceedings of the 2003 Conference on Computer Vision and Pattern Recognition Workshop, IEEE, Wisconsin, USA, pp. 19, 2003.
R. Marfil, L. Molina-Tanco, A. Bandera, J. A. Rodríguez, F. Sandoval. Pyramid segmentation algorithms revisited. Pattern Recognition, vol. 39, no. 8, pp. 1430–1451, 2006.
J. Zhang, Z. T. Zhang, D. Xu, W. S. Zhang. Aligning micro-gripper to ring object in high precision with microscope vision. In Proceedings of 2011 IEEE International Conference on Computer Science and Automation Engineering, IEEE, Shanghai, China, pp. 123–127, 2011.
M. A. Fischler, R. C. Bolles. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, vol. 24, no. 6, pp. 381–395, 1981.
X. B. Chen, M. G. Li, N. Cao. Modeling of the fluid volume transferred in contact dispensing processes. IEEE Transactions on Electronics Packaging Manufacturing, vol. 32, no. 3, pp. 133–137, 2009.
X. B. Chen, H. Ke. Effects of fluid properties on dispensing processes for electronics packaging. IEEE Transactions on Electronics Packaging Manufacturing, vol. 29, no. 2, pp. 75–82, 2006.
This work was supported by National Natural Science Foundation of China (Nos. 61227804 and 61105036).
Fu-dong Li received his B. Sc. and M. Sc. degrees in mechanical design and automation, and mechatronics engineering from Shandong University of Science and Technology, China in 2008 and 2011, respectively. He is currently working toward his Ph.D. degree in control science and engineering at Institute of Automation, Chinese Academy of Sciences (IACAS).
His current research interests include computer vision and automation.
De Xu received his B. Sc. and M. Sc. degrees from Shandong University of Technology, China in 1985 and 1990, respectively, and Ph. D. degree from Zhejiang University, China in 2001, all in control science and engineering. He joined the Institute of Automation, Chinese Academy of Sciences (IACAS), China in 2001, where he is currently professor in the Research Center of Precision Sensing and Control.
His research interests include robotics and automation, particularly control of robots, such as visual control and intelligent control.
Zheng-tao Zhang received his B. Sc. degree from China University of Petroleum, China in 2004, and M. Sc. degree in control science and engineering from Beijing Institute of Technology, China in 2007. He is an associate professor in Institute of Automation, Chinese Academy of Sciences, China.
His research interests include visual measurement, micro-assembly and automation.
Ya-li Shi received her B. Sc. degree from Changchun University of Science and Technology, China in 2004, and Ph. D. degree from Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, China in 2009. She is currently an associate professor at the Research Center of Precision Sensing and Control, Institute of Automation, Chinese Academy of Sciences, China.
Her research interests include fine mechanics and micro assembly.
Li, Fd., Xu, D., Zhang, Zt. et al. Realization of an Automated Microassembly Task Involving Micro Adhesive Bonding. Int. J. Autom. Comput. 10, 545–551 (2013). https://doi.org/10.1007/s11633-013-0752-7