With the increasing advancements in Artificial Intelligence and its varied applications across multiple domains, the manufacturing industry is not left behind. Manufacturing and production require a large labour force to ensure good-quality end results. While this may be a necessity in the early stages of development, there is a way to reduce it while still checking the quality of the end product. This project aims to use the power of Artificial Intelligence, specifically computer vision, to create a quality-inspection tool that localizes and classifies the required objects in an image of a dengue kit. The project covers the entire process, including the simulation and design of the conveyor belt, and shows how the two combined can accelerate quality inspection by reducing manual effort.
I. INTRODUCTION
Background of the project
This project applies Artificial Intelligence, specifically computer vision, to build a quality-inspection tool that localizes and classifies the required design elements in images of a dengue kit. It covers the simulation and design of the conveyor belt and shows how, combined, they can accelerate quality inspection while reducing manual effort.
Artificial intelligence
AI, or artificial intelligence, is the intelligence displayed by machines, in contrast to that of humans and animals. AI enables machines to undertake and complete tasks with the efficiency of a machine applied with the intelligence of humans. AI is seen as central to the future of many sectors, where it allows work to be done in a more structured and logical manner. The field of manufacturing will benefit greatly from the introduction of AI, as many of the mundane tasks that humans used to perform in industry can now be done by machines.
Quality inspection
Quality inspection of a product is the process by which inspectors review the product against the parameters that can affect its quality. Inspectors review the product visually and check for defects such as cracks, missing components and surface flaws; if any defect is found, that sample is removed from the batch. In our project, this inspection is performed by an AI-based model trained to detect the defects, ensuring proper quality control.
II. LITERATURE REVIEW
Redmon et al. [1] present YOLO, a unified, real-time object detection model. YOLO unifies the separate components of object detection into a single neural network. It can be trained directly on full images and is well suited to applications that rely on fast, robust object detection.
Daniyan et al. [2] note that conveyor systems are used in most handling and assembly industries, where it is easier, safer, faster and more efficient to transport materials with a high degree of automation. The paper presents the characteristics of a belt conveyor system with three-roll idlers developed for conveying packs of bottled water.
Mittal et al. [3] implement YOLO with the assistance of the open-source OpenCV library using a convolutional neural network. As the name suggests, YOLO looks at the whole picture just once, passes it through the network once and detects the objects. The paper focuses on the fundamental structure of the convolutional neural network, object detection based on YOLO and the libraries used.
Bhoyar et al. [4] describe an adjustable belt conveyor system that is easy to install, offers greater reliability and protection, and does not require complicated components. Its components include the drive unit, structural frame, belt width and material, belt speed, idler type, guide plates and transfer chutes.
III. OBJECTIVES
The main aim is to ensure proper quality inspection of dengue kits passing over a conveyor belt.
The other specific objectives are:
Creating a fully automated model of a conveyor belt, including a four-camera setup for quality inspection, using CAD software.
Creating a computer vision model that conducts quality inspection by classifying and localizing each class, leveraging the power of Artificial Intelligence.
Training the AI-based model to achieve maximum accuracy.
Simulating the entire quality inspection process, in which the system we created reviews the dengue kits passing along the conveyor belt under the camera setup.
Creating a web app that takes an image, shows the predictions and stores them in a database for later reference (a minimal sketch of this workflow is given after this list).
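For illustration, the sketch below shows one way such a web app could be wired together: a minimal Flask service that loads trained YOLOv4 weights through OpenCV's DNN module, runs detection on an uploaded image, marks the kit as complete only when all six classes are found, and logs the verdict to a SQLite database. The file names (yolov4-obj.cfg, yolov4-obj_best.weights, obj.names), the endpoint and the six-class pass/fail rule are assumptions made for the sketch, not details taken from our implementation.

```python
# Hypothetical sketch: Flask app that inspects an uploaded dengue-kit image
# with a trained YOLOv4 model and stores the verdict in SQLite.
import sqlite3
from datetime import datetime

import cv2
import numpy as np
from flask import Flask, request, jsonify

app = Flask(__name__)

# Assumed file names produced by darknet training; adjust to the real ones.
CFG, WEIGHTS, NAMES = "yolov4-obj.cfg", "yolov4-obj_best.weights", "obj.names"
CLASS_NAMES = [line.strip() for line in open(NAMES)]

net = cv2.dnn.readNetFromDarknet(CFG, WEIGHTS)
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

def init_db():
    with sqlite3.connect("inspections.db") as db:
        db.execute("""CREATE TABLE IF NOT EXISTS inspections
                      (ts TEXT, detected TEXT, verdict TEXT)""")

@app.route("/inspect", methods=["POST"])
def inspect():
    # Decode the uploaded image and run YOLOv4 detection on it.
    data = np.frombuffer(request.files["image"].read(), dtype=np.uint8)
    frame = cv2.imdecode(data, cv2.IMREAD_COLOR)
    class_ids, scores, boxes = model.detect(frame, confThreshold=0.5, nmsThreshold=0.4)
    detected = sorted({CLASS_NAMES[int(i)] for i in np.array(class_ids).flatten()})
    # Assumed pass/fail rule: a kit is complete only if all six classes appear.
    verdict = "complete" if len(detected) == len(CLASS_NAMES) else "faulty/incomplete"
    with sqlite3.connect("inspections.db") as db:
        db.execute("INSERT INTO inspections VALUES (?, ?, ?)",
                   (datetime.now().isoformat(), ",".join(detected), verdict))
    return jsonify({"detected": detected, "verdict": verdict})

if __name__ == "__main__":
    init_db()
    app.run(debug=True)
```

An image could then be submitted with, for example, curl -F "image=@kit.jpg" http://localhost:5000/inspect.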
IV. METHODOLOGY
The initial project outline involved designing the various components by taking reference from various sources and then training YOLOv4 for prediction.
The steps involved are described below:
V. TRAINING
The first step is the collection of the dataset. Since we aimed to train on only one kind of dengue kit, we took images of the kit from different angles and under different lighting conditions. This teaches our computer vision model to decide, under any conditions, whether a kit has been assembled as required. After a thorough literature review, we chose YOLOv4 for training, as it is faster and more accurate than the other algorithms in the same family. We cloned the GitHub repository https://github.com/AlexeyAB/darknet to obtain the required files and weights. After collecting over 300 images of the dengue kit, we labelled the different classes in each image using the open-source tool LabelImg (https://github.com/tzutalin/labelImg). A sample image attached below shows how the labelling is carried out; this image can also be obtained from the GitHub repository shared above.
Here, bounding boxes are created around the objects to be detected and labelled accordingly. In our project, we chose six classes for the six design components that must be identified so that the inspection can differentiate between a 'complete design' and a 'faulty/incomplete design'. The annotation format that LabelImg produces is sketched below.
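For reference, when LabelImg is set to YOLO format it saves one .txt file per image, with one line per bounding box giving the class index followed by the box centre, width and height, all normalized to the image size. The snippet below is a small, hypothetical helper for reading such a file; the class names are placeholders rather than the labels actually used in our dataset.

```python
# Hypothetical example: reading one YOLO-format annotation produced by LabelImg.
# Each line is: <class_id> <x_center> <y_center> <width> <height>, with all
# coordinates normalized to [0, 1] relative to the image width and height.
from pathlib import Path

CLASS_NAMES = ["class_0", "class_1", "class_2",   # placeholder names for the
               "class_3", "class_4", "class_5"]   # six design components

def read_yolo_labels(txt_path):
    """Return a list of (class_name, x_center, y_center, width, height)."""
    boxes = []
    for line in Path(txt_path).read_text().splitlines():
        class_id, xc, yc, w, h = line.split()
        boxes.append((CLASS_NAMES[int(class_id)],
                      float(xc), float(yc), float(w), float(h)))
    return boxes

# Example usage (assumed path): print(read_yolo_labels("dataset/kit_001.txt"))
```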
This is a sample image of the kit in our dataset. The printed contents are grouped into three classes, and the three other shapes make up the remaining three classes. Once this initial stage is complete, we move on to the training process. We first split the dataset into training and test sets of images, which ensures that the model learns from the training images and is evaluated on the test images; a sketch of how such a split can be generated is shown below. For training, we use Google Colaboratory (Google Colab), a free cloud-based notebook environment that provides a Tesla K80 GPU, which speeds up training considerably compared with our local system. Our training process took about six days to complete.
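As an illustration, darknet expects plain-text files listing the image paths of the training and test sets (referenced from the obj.data file). The sketch below, which assumes the images and their label files sit in data/obj/ and uses a 90/10 split, is one way such lists could be generated; the directory name and split ratio are assumptions, not values reported here.

```python
# Hypothetical sketch: split labelled images into the train/test list files
# that darknet's obj.data points to. Directory name and 90/10 ratio are assumed.
import glob
import random

random.seed(42)                                   # reproducible split
images = sorted(glob.glob("data/obj/*.jpg"))      # assumed dataset location
random.shuffle(images)

split = int(0.9 * len(images))                    # 90% train / 10% test
with open("data/train.txt", "w") as f:
    f.write("\n".join(images[:split]))
with open("data/test.txt", "w") as f:
    f.write("\n".join(images[split:]))

print(f"{split} training images, {len(images) - split} test images")
```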
Steps involved in the training code are as follows:
1. Clone the GitHub repository of AlexeyAB's darknet
2. Make the necessary changes in the Makefile
3. Compile darknet
4. Copy the configuration files after making all the necessary changes
5. Download the pre-trained weights from the same GitHub repository
6. Train the model
Since we have 6 output classes, our training ran for 12,000 iterations. A command-level sketch of these steps is given below.
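To make these steps concrete, the sketch below drives them from Python (for example in a Colab cell). The Makefile flags, the configuration edits for six classes (which yield the 12,000 iterations mentioned above) and the file names follow the conventions documented in AlexeyAB's darknet repository; the exact paths and configuration file names used in our project are assumptions.

```python
# Hypothetical sketch of the darknet training steps as a Python/Colab driver.
import subprocess

def run(cmd):
    print("$", cmd)
    subprocess.run(cmd, shell=True, check=True)

# 1. Clone the repository.
run("git clone https://github.com/AlexeyAB/darknet")

# 2. Enable GPU/cuDNN/OpenCV in the Makefile, then 3. compile.
run("sed -i 's/GPU=0/GPU=1/; s/CUDNN=0/CUDNN=1/; s/OPENCV=0/OPENCV=1/' darknet/Makefile")
run("make -C darknet")

# 4. The custom cfg is the stock yolov4-custom.cfg edited for 6 classes:
#      max_batches = 6 * 2000 = 12000, steps = 9600,10800,
#      classes = 6 in every [yolo] layer,
#      filters = (6 + 5) * 3 = 33 in the [convolutional] layer before each [yolo].
#    obj.data lists the class count, the train/test list files and the names file.

# 5. Download the pre-trained convolutional weights linked from the repository
#    README (verify the URL against the current README).
run("wget -P darknet https://github.com/AlexeyAB/darknet/releases/download/"
    "darknet_yolo_v3_optimal/yolov4.conv.137")

# 6. Train (paths to obj.data and the edited cfg are assumed names).
run("cd darknet && ./darknet detector train data/obj.data cfg/yolov4-obj.cfg "
    "yolov4.conv.137 -dont_show -map")
```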
Results of the training process were as follows:
VII. EXPECTED OUTCOMES
The outcome of the project depends entirely on the trained model and on how accurately the design elements of the dengue kit under inspection are detected.
We aim to produce a final simulated representation of an industrial-level quality inspection setup, including the cameras, the conveyor belt, the dengue kit passing through and the final output.
We also hope to produce a final web app that takes an image, shows the predictions and stores them in the database for later reference.
If time permits, we would also like to develop the project further.
VIII. BENEFITS TO SOCIETY
The proposed methodology for inspecting the design of the dengue kit benefits industrial development, helping industries modernise using Artificial Intelligence.
Inspecting the dengue kits primarily involves discarding the defective kits. As a result, fewer defective kits reach the market.
With the help of the proposed product, users will be able to differentiate real kits from fake ones.
IX. FUTURE SCOPE
By training our model on multiple datasets, users will be able to choose any of the available products and run the inspection and testing process.
Further, with an app based on this model, users will be able to perform this inspection easily, even from their smartphones.
X. LIMITATIONS
Training the proposed model takes about 7 to 10 days.
A large number of images (in the thousands) of the desired product are needed to train the model.
XI. CONCLUSION
Based on the testing results and work of the project, it can be concluded that:
1) The inspection model for which the simulation was carried out is fully operational.
2) The working model can be implemented in industries with small- or large-scale production.
3) The model automates the inspection process, thus reducing the time it requires.
4) The precision and accuracy of the AI-based model are high compared with those of a human inspector.
5) It also reduces the errors caused by human intervention in the inspection process.
6) The Artificial Intelligence model can be modified and recreated for products of any type and size.
7) The model combines the traditional inspection process with AI, which is a step towards the Industry 4.0 revolution.
REFERENCES
[1] D. K. Hansen, K. Nasrollahi, C. B. Rasmussen and T. B. Moeslund, "Real-time barcode detection and classification using deep learning," Aalborg University, Rendsburggade 14, 9000 Aalborg, Denmark.
[2] H. Cho, D. Kim, J. Park, K. Roh and W. Hwang, "2D barcode detection using images for drone-assisted inventory management," Dept. of Software and Computer Science, Ajou Univ., South Korea.
[3] K. N. S. Ananth, V. Rakesh and P. K. Visweswarao, "Design and selecting the proper conveyor belt."
[4] Z. Zhang, W. Shen, C. Yao and X. Bai, "Symmetry-based text line detection in natural scenes," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 2558–256
[5] A. Gupta, A. Vedaldi and A. Zisserman, "Synthetic data for text localisation in natural images," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2315–2324
[6] Aiswarya S. Kumar and Elizabeth Sherly, "A convolutional neural network for visual object recognition in marine sector," 2017 2nd International Conference for Convergence in Technology (I2CT)
[7] J. Redmon and A. Farhadi, "YOLOv3: An incremental improvement," arXiv preprint arXiv:1804.02767, 2018.
[8] Athira P, Mithun Haridas T. P. and Supriya M. H., "Underwater object detection model based on YOLOv3 architecture using deep neural networks," 2021 7th International Conference on Advanced Computing and Communication Systems (ICACCS)
[9] J. Redmon, S. Divvala, R. Girshick and A. Farhadi, "You Only Look Once: Unified, Real-Time Object Detection," 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, 2016, pp. 779-788
[10] J. Redmon and A. Farhadi, "YOLO9000: Better, Faster, Stronger," 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 2017, pp. 6517-6525
[11] R. Girshick, "Fast R-CNN," 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, 2015, pp. 1440-1448
[12] S. Ren, K. He, R. Girshick and J. Sun, "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 6, pp. 1137-1149, 1 June 2017
[13] J. Redmon, "Darknet: Open source neural networks in C," http://pjreddie.com/darknet/, 2013-2016.
[14] A. Mekonnen and F. Lerasle, "Comparative Evaluations of Selected Tracking-by-Detection Approaches," IEEE Transactions on Circuits and Systems for Video Technology, vol. 29, no. 4, pp. 996-1010, 2019
[15] "LabelImg," Tzutalin.github.io, 2019. [Online]. Available: https://tzutalin.github.io/labelImg/