
Title: A mobile markerless AR system for maintenance and repair
Name: Jime, Most Jakeya Sultana
ID: 16-31984-2
Section: F

Motivation: We face many problems in our daily lives, and since we now live in an era of science and technology, we try to solve them by technological means. I chose this topic for several reasons. A few months ago my mobile phone stopped working: everything else was fine, but incoming and outgoing calls were blocked because of a network issue.

I went to a mobile repair shop, and they charged me 1000 Tk to fix the problem. After a few hours, however, they told me that my phone was fine and its networking was fine too, and that I should replace my SIM card at customer care. They still took the 1000 Tk, and at the time I could say nothing. So I wasted not only my money but also my time.

I am a student of computer science and engineering, and a few days after that incident I heard about augmented reality in my artificial intelligence class. My course teacher showed us a video in which an electronic device was repaired using augmented reality: when a mobile phone is held in front of an electrical switchboard, the phone gives step-by-step instructions on how to solve the problem. I thought that if we applied this concept to mobile phone repair, we could save money, time, and energy, so I studied the topic and became interested in it. Systems that can read words from a book already exist. My field of interest is artificial intelligence, and I want to pursue my higher studies in this area. Being able to repair our electronic devices using AR would be very helpful. The idea is interesting but not yet fully implemented: much related work uses marker-based AR, but markerless AR would be more efficient and would make users more self-reliant.

Literature Review:

Introduction: Suppose one of our electronic devices, such as a TV, fan, light, laptop, or coffee maker, stops working. What do we do? We consult an expert who can repair it, which is time-consuming and costly. I therefore thought about how to save time and money on repair and maintenance. If there were an augmented repair system that guided us through repairs, life would become easier: the system would show us all the steps needed to fix the problem, and we could repair our electronic devices ourselves. Current vision-based trackers rely on tracking markers. Markers increase robustness and reduce computational requirements, but their use can be complicated, as they require a certain amount of maintenance. Direct use of scene features for tracking is therefore desirable.
To this end, we describe a general system that tracks the position and orientation of a camera observing a scene without any visual markers. The method is a two-stage process: in the first stage, a set of features is learned with the help of an external tracking system while it is in action; in the second stage, these learned features are used for camera tracking once the first stage decides that this is possible. The system is general enough to employ any available feature-tracking and pose-estimation method for learning and tracking. Direct use of scene features instead of markers is especially desirable when certain parts of the workspace do not change over time; for example, a control panel has fixed buttons and knobs that remain the same over its lifetime. Using these rigid, unchanging features for tracking also simplifies the preparation of scenarios for scene augmentation. The developed AR system has been evaluated in numerous tests in a real industrial context and demonstrated robust, stable behavior. The system is built on well-known concepts and algorithms, and in our opinion it is the right mixture of algorithms that led to a successful AR system. We experimentally demonstrate the viability of the method in real-life examples.

Related work: In order to evaluate the methods, before direct experimentation we need to assess them against comparable setups. Of the two types of motion capture, (1) marker-based and (2) markerless, markerless is more convenient according to Ashish Shingade and Archana Ghotkar (2014), because in markerless motion capture no performer needs to wear a suit and camera handling is easier.
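The two-stage design described above can be pictured as a simple controller that stays in a learning mode until enough features have been collected, then switches to markerless tracking. The following sketch is purely illustrative: the class name, the feature store, and the 50-feature threshold are assumptions for this essay, not components of the cited system.

```python
# Illustrative sketch of the two-stage learn-then-track architecture.
# MIN_FEATURES and all names here are assumptions, not the paper's values.

MIN_FEATURES = 50  # assumed minimum before markerless tracking is trusted

class TwoStageTracker:
    def __init__(self):
        self.learned_features = []   # 3D features learned in stage one
        self.mode = "LEARNING"

    def learn(self, feature):
        """Stage one: collect features with help of an external tracker."""
        self.learned_features.append(feature)
        if len(self.learned_features) >= MIN_FEATURES:
            self.mode = "TRACKING"   # stage one decides tracking is possible

    def track(self):
        """Stage two: markerless tracking using the learned features."""
        if self.mode != "TRACKING":
            raise RuntimeError("not enough learned features yet")
        return self.learned_features  # a real system would estimate pose here

tracker = TwoStageTracker()
for i in range(60):
    tracker.learn(("corner", i))
print(tracker.mode)  # "TRACKING" once 50+ features are learned
```

The point of the sketch is only the hand-over: the external tracker bootstraps the feature map, and the markerless stage takes over once the map is judged sufficient.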
From their survey of motion capture techniques using the Kinect camera, it was observed that detecting and tracking skeleton joints is a significant problem for this method; the Kinect camera is a preferable solution for obtaining depth information of the human body. P. Gerard and A. Gagalowicz (2000) note that in recent years many Industrial Augmented Reality (IAR) applications have shifted from video to still images to create a mixed view. Since the use of AR in industrial applications is promising yet challenging, several prototypical systems have already been developed. This system uses markerless AR because markers may occlude parts of the workspace and must be properly calibrated to the tracked reference frame. In addition, marker-based tracking systems need a free line of sight between the camera and the marker, which cannot always be guaranteed in repair scenarios where partial occlusions by the worker's hands and tools are common. The robust estimation used here follows P. J. Huber's Robust Statistics (1981). Initialization and 2D feature tracking: in an initial step, a set of salient 2D intensity corners is detected in the first image of the sequence. These 2D features are then tracked throughout the image sequence by local feature matching with the KLT operator. If feature tracks are lost, new tracks are constantly reinitialized; the new tracks are merged with previous tracks in the 3D stage to avoid drift. 3D feature tracking and pose estimation: from the given 2D feature tracks, a structure-from-motion (SfM) approach can be applied to estimate the metric camera pose and the 3D feature positions simultaneously.
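To make the KLT-style local matching mentioned above concrete, the following numpy sketch estimates the translation of a feature window between two frames with a single Lucas-Kanade step (spatial gradients from the first frame, one 2x2 normal-equation solve). It is a simplified single-iteration, single-scale version, not the full pyramidal KLT tracker used in the cited work.

```python
import numpy as np

def lk_step(I1, I2):
    """One Lucas-Kanade step: estimate the (dx, dy) shift of I2 w.r.t. I1."""
    Iy, Ix = np.gradient(I1)          # spatial gradients of the first frame
    It = I2 - I1                      # temporal difference
    # Normal equations: [[sum Ix^2, sum IxIy], [sum IxIy, sum Iy^2]] d = -[sum IxIt, sum IyIt]
    A = np.array([[(Ix * Ix).sum(), (Ix * Iy).sum()],
                  [(Ix * Iy).sum(), (Iy * Iy).sum()]])
    b = -np.array([(Ix * It).sum(), (Iy * It).sum()])
    return np.linalg.solve(A, b)      # least-squares displacement (dx, dy)

# Synthetic check: a smooth Gaussian blob shifted by (0.5, 0.3) pixels.
y, x = np.mgrid[0:32, 0:32]
blob = lambda cx, cy: np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / 20.0)
d = lk_step(blob(15.0, 15.0), blob(15.5, 15.3))
print(d)  # approximately [0.5, 0.3]
```

For larger motions a real tracker iterates this step inside an image pyramid, which is exactly what makes constant reinitialization and track merging necessary when the linearization fails.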
Takashi Okuma, Takeshi Kurata, and Katsuhiko Sakaue (2004) observed that the problem of markerless AR has two parts:
I) Tracking: if the number of successfully tracked features falls below a threshold (20), all features are deleted and the whole procedure is repeated using the last estimated pose.
II) Initialization: the initialization phase concerns determining a camera's projection matrix without any constraints on the pose and without any knowledge of previous poses. The only constraint in this case is that we are working with a calibrated camera and thus a known intrinsic matrix K. The problem is solved when a sufficiently large set of 2D-3D correspondences can be established. Given the 2D-3D point matches, the pose of the camera is computed using the algorithm by Tsai. An internal calibration is performed for the camera before the training, accounting for radial distortion up to 6th degree. A. J. Davison (2003) described an online AR system that allows robust 3D camera tracking in complex and uncooperative scenes where parts of the scene may move independently. It is based on the structure-from-motion (SfM) approach from computer vision: 3D tracking rests on robust camera pose estimation using SfM algorithms optimized for real-time performance, which can handle measurement outliers from the 2D tracking using robust statistics. Vincent Lepetit and Pascal Fua (2004) note that once the markers are calibrated, i.e., their positions are calculated, all of the cameras used in the experiments are internally calibrated using these markers. Tsai's method [25] is used to allow radial distortion correction up to 6th degree, which ensures a very good pose estimate for the camera when the right correspondences are provided. K. Pentenrieder, C. Bade, F. Doil, and P. Meier (2007) report that, to improve the performance of the initialization procedure, a training procedure was introduced to eliminate unreliable features during the key-frame learning stage.
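The initialization step above reduces to estimating a projection matrix from 2D-3D point matches. As a simplified illustration, the numpy sketch below uses the plain Direct Linear Transform (DLT) to recover a 3x4 projection matrix from six or more correspondences; this is a stand-in for the Tsai calibration the text refers to, which additionally models radial distortion, and all names and values in it are illustrative.

```python
import numpy as np

def dlt_projection(X, x):
    """Estimate a 3x4 projection matrix P from n >= 6 3D-2D correspondences.
    X: (n, 3) world points, x: (n, 2) image points. Plain DLT, no distortion."""
    A = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        Xh = [Xw, Yw, Zw, 1.0]
        A.append([0.0] * 4 + [-c for c in Xh] + [v * c for c in Xh])
        A.append(Xh + [0.0] * 4 + [-u * c for c in Xh])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)        # null vector minimizing |A p|

def project(P, X):
    """Pinhole projection of (n, 3) world points with projection matrix P."""
    Xh = np.hstack([X, np.ones((len(X), 1))])
    xh = Xh @ P.T
    return xh[:, :2] / xh[:, 2:3]

# Synthetic check: project points with a known camera, then recover P.
rng = np.random.default_rng(0)
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])   # assumed intrinsics
Rt = np.hstack([np.eye(3), np.array([[0.1], [-0.2], [4.0]])])  # assumed pose
P_true = K @ Rt
X = rng.uniform(-1, 1, size=(10, 3))
x = project(P_true, X)
P_est = dlt_projection(X, x)
print(np.allclose(project(P_est, X), x, atol=1e-6))  # True
```

With noise-free matches the estimate is exact up to scale; real systems refine such a linear solution with nonlinear optimization and a distortion model.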
After the user adds a key frame to the storage, he is asked to move the camera a little in the vicinity of the pose used to create the key frame. As the user moves the camera, 2D features extracted from the key frame are tracked with KLT into every video frame. All features for which the tracking fails are rejected and not saved in the key-frame structure. As a consequence, we achieve a more robust initialization, since the probability of successfully tracking a feature that was saved with the key frame increases. Since the use of AR in industrial applications is a promising and at the same time challenging issue, several prototypical systems have already been developed. The system developed during the biggest German AR project, ARVIKA [2], as well as the system in [1], use markers for pose estimation, which is not practical in many real industrial scenarios due to the line-of-sight problem. On the other hand, there are numerous attempts to solve the pose estimation problem without the use of fiducials (e.g., [6], [7], [9], [15]); most of these attempts lack testing in real industrial applications. An overview of AR technology in production can be found in [4]. Most of the work related to our tracking approach is described in [17]. We, however, use a different feature detection and tracking algorithm, as described in the following sections. Furthermore, we do not use the local bundle adjustment technique proposed in [17], yet we do not experience noticeable jitter. We use a restrictive feature rejection strategy which eliminates potential outliers during the tracking stage and removes the need for RANSAC pre-processing of the 2D-3D correspondences.
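A restrictive rejection strategy of the kind mentioned above can be illustrated, in simplified form, by discarding any 2D-3D correspondence whose reprojection error under the current projection matrix exceeds a pixel threshold. The function names, the 2-pixel threshold, and the synthetic data below are assumptions for illustration, not values from the cited system.

```python
import numpy as np

def reject_outliers(P, X, x, max_err_px=2.0):
    """Keep only 2D-3D correspondences whose reprojection error under
    projection matrix P is below max_err_px (threshold is illustrative)."""
    Xh = np.hstack([X, np.ones((len(X), 1))])
    xh = Xh @ P.T
    proj = xh[:, :2] / xh[:, 2:3]            # pinhole projection of the 3D points
    err = np.linalg.norm(proj - x, axis=1)   # per-feature error in pixels
    keep = err < max_err_px
    return X[keep], x[keep], err

# Synthetic check: five consistent correspondences plus one corrupted match.
K = np.array([[500.0, 0, 160], [0, 500.0, 120], [0, 0, 1]])  # assumed intrinsics
P = K @ np.hstack([np.eye(3), np.array([[0.0], [0.0], [3.0]])])
X = np.array([[0, 0, 0], [0.5, 0, 0], [0, 0.5, 0],
              [-0.5, 0, 0.2], [0, -0.5, 0.2], [0.3, 0.3, 0]], float)
Xh = np.hstack([X, np.ones((6, 1))])
x = (Xh @ P.T)[:, :2] / (Xh @ P.T)[:, 2:3]
x[5] += 30.0                                  # corrupt the last 2D measurement
X_in, x_in, err = reject_outliers(P, X, x)
print(len(X_in))  # 5
```

Because gross outliers are filtered before pose refinement, a robust estimator can then converge without a separate RANSAC stage, which is the design choice the text argues for.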
In addition, we use an enhanced algorithm for the training of key frames, which already allows the rejection of malign features during the learning stage, making the initialization procedure more reliable.

To conduct this research, the questions to be answered are:
1) How appropriate is the chosen method for the research?
2) What is augmented reality?
3) Why do we use markerless AR instead of marker-based AR?
4) What are the advantages of markerless AR?
5) How does the user use the system?
6) How will the data be collected?
7) What type of research methodology will be followed?
8) How will the data be analyzed?

Proposed Methodology: This research follows the experimental method because it generates statistically analyzable data; since we need accurate data, this method is well suited. In this work we introduce a complete AR system for maintenance and repair purposes. In the past there have been a few attempts to develop an AR system for industrial applications. The solution developed during the ARVIKA project [2] used marker-based optical tracking in combination with a video see-through setup worn by a technician. In some scenarios, however, this approach turned out not to be applicable, because markers may occlude parts of the workspace and must be properly calibrated to the tracked reference frame. In addition, marker-based tracking systems need a free line of sight between the camera and the marker, which cannot always be guaranteed in repair scenarios where partial occlusions by the worker's hands and tools are common. Former hardware solutions forced the user either to wear bulky computing devices or to be connected to them via a flexible cable. Our experience showed that both solutions are often not accepted in industry for ergonomic reasons and due to the risk of injuries. To overcome these problems we developed a markerless tracking system combined with a lightweight mobile setup.
In the proposed system, the tracking problem is: given the camera pose P(t-1) at time t-1, video images I(t-1) and I(t) taken at times t-1 and t, as well as a 3D work-area model M, estimate the current camera pose P(t). For each particular feature this becomes a mixed 2D-2D and 3D-2D matching and bundle-adjustment problem. The system evaluates each set of feature correspondences in order to decide whether the feature is a stable one, meaning that:
- over time, the 3D feature does not move independently of the observer (i.e., it has a static position in the world coordinate system);
- the distribution of the intensity characteristics of the feature does not change significantly over time;
- the feature is robust enough that the system can find the right detection algorithm to extract it under normal changes in lighting conditions (i.e., changes which normally occur in the workspace);
- the feature is reconstructed and back-projected, using the motion estimated by the external tracker, with acceptable back-projection error;
- the chosen subset of stable features must allow accurate localization compared to the ground truth from the external tracker.

The second set of experiments was conducted to see whether tracking can be achieved using cameras other than the one used in training. Figure 8 shows the results obtained using a Sony XC55BB black-and-white camera, internally calibrated as explained above. We obtained more than 5 video sequences with this camera (on average about 1000 frames, with considerable change in viewpoint). After initialization of the pose for the first frame, we let our markerless tracker track the learned features; some sample results are shown in Figure 8. Even with a camera very different from the learning camera, the system yields very good pose estimates during tracking. The high radial distortion due to the larger field of view does not affect the accuracy and performance of the markerless tracking system.

REFERENCES:
[1] AR Toolkit.
[2] Yakup Genc, S. Riedel, Fabrice Souvannavong, C. Akinlar, and Nassir Navab. Marker-less tracking for AR: A learning-based approach. In ISMAR, pages 295-304, 2002.
[3] Reinhard Koch, Kevin Koeser, Birger Streckel, and Jan-Friso Evers-Senne. Markerless image-based 3d tracking for real-time augmented reality applications. WIAMIS 2005, April 2005.
[4] Takashi Okuma, Takeshi Kurata, and Katsuhiko Sakaue. A natural feature-based 3d object tracking method for wearable augmented reality. In The 8th IEEE International Workshop on Advanced Motion Control (AMC'04), pages 451-456, 2004.
[5] Donald W. Marquardt. An algorithm for least-squares estimation of nonlinear parameters. Journal of the Society for Industrial and Applied Mathematics, 11(2):431-441, June 1963.
[6] Luca Vacchetti, Vincent Lepetit, and Pascal Fua. Stable real-time 3d tracking using online and offline information. IEEE Trans. Pattern Anal. Mach. Intell., 26(10):1385-1391, 2004.
[7] P. Georgel, P. Schroeder, S. Benhimane, S. Hinterstoisser, M. Appel, and N. Navab. An industrial augmented reality solution for discrepancy check. ISMAR, 2007.
[8] K. Pentenrieder, C. Bade, F. Doil, and P. Meier. Augmented reality based factory planning: an application tailored to industrial needs. ISMAR, 2007.
[9] A. J. Davison. Real-time simultaneous localisation and mapping with a single camera. In Proceedings International Conference on Computer Vision, Nice, 2003.
[10] A. J. Davison, Y. G. Cid, and N. Kita. Real-time 3D SLAM with wide-angle vision. In Proc. IFAC Symposium on Intelligent Autonomous Vehicles, Lisbon, July 2004.
[11] G. Ou, Y. Gao, and Y. Liu. Real-time vehicular traffic violation detection in traffic monitoring stream. In 2012 IEEE/WIC/ACM, Beijing, China, 2012.
[12] S. A. Velastin, J. H. Yin, A. C. Davies, M. A. Vicencio-Silva, R. E. Allsop, and A. Penn (1994). Automated measurement of crowd density and motion using image processing. IEE 7th International Conference on Road Traffic Monitoring and Control, 26-28 April 1994, London, UK.
[13] U. Neumann and Y. Cho. A self-tracking augmented reality system. In Proceedings of the ACM Symposium on Virtual Reality and Applications, pages 109-115, July 1996.
[14] U. Neumann and S. You. Natural feature tracking for augmented reality. IEEE Transactions on Multimedia, 1(1):53-64, March 1999.
[15] S. Gupte, O. Masoud, R. F. K. Martin, and N. P. Papanikolopoulos. Detection and classification of vehicles. IEEE Trans. Intell. Transport. Syst., vol. 3, pp. 37-47, Mar. 2002.
[16] M. Kimachi, K. Kanayama, and K. Teramoto. Incident prediction by fuzzy image sequence analysis. In Proc. IEEE Int. Conf. VNIS, 1994, pp. 51-57.
[1] S. Kamijo, Y. Matsushita, K. Ikeuchi, and M. Sakauchi (2000). Traffic monitoring and accident detection at intersections. IEEE Transactions on Intelligent Transportation Systems, 1(2):108-118. doi:10.1109/6979.880968.
[17] K. Meyer, H. L. Applewhite, and F. A. Biocca. A survey of position trackers. Presence: Teleoperators and Virtual Environments, 1(2):173-200, August 1992.
[18] O. D. Faugeras. Three-Dimensional Computer Vision. MIT Press, 1993.
[19] X. Zhang, N. Navab, and S.-P. Liou. E-commerce direct marketing using augmented reality. In Proc. ICME 2000 (IEEE Int. Conf. on Multimedia and Expo), New York, 2000.
[20] V. Vlahakis, N. Ioannidis, J. Karigiannis, M. Tsotros, M. Gounaris, D. Stricker, T. Gleue, P. Daehne, and L. Almeida. Archeoguide: An augmented reality guide for archaeological sites. IEEE Computer Graphics and Applications, 22(5):52-59, 2002.
