https://www.mdu.se/

OBJECT DETECTION USING DEEP LEARNING ON METAL CHIPS IN MANUFACTURING
Mälardalen University, School of Innovation, Design and Engineering.
2021 (English). Independent thesis, basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
Abstract [en]

When designing cutting tools for the turning industry, providing optimal cutting parameters is important both for the client and for the company's own research. By examining the metal chips that form in the turning process, operators can recommend optimal cutting parameters. Instead of manually classifying the metal chips that come from the turning process, an automated approach to chip detection and classification is preferred. This thesis aims to evaluate whether such an approach is possible using either a Convolutional Neural Network (CNN) or CNN feature extraction coupled with machine learning (ML).

The thesis started with a research phase in which we reviewed existing state-of-the-art CNNs, image processing and ML algorithms. Based on this research, we implemented our own object detection algorithm and chose to implement two existing CNNs, AlexNet and VGG16. A third CNN was designed and implemented with our specific task in mind. The three models were tested against each other, both as standalone image classifiers and as feature extractors coupled with an ML algorithm. Because the chips were inside a machine, different camera angles and lighting setups had to be tested to evaluate which setup provided the best images for classification.

A top view of the cutting area was found to be the optimal angle, with light focused both below the cutting area and in the chip disposal tray. The smaller proposed CNN, with three convolutional layers, three pooling layers and two dense layers, was found to rival both AlexNet and VGG16, both as a standalone classifier and as a feature extractor. The proposed model was designed with a limited system in mind and is therefore better suited to such systems while still achieving high accuracy. As a standalone classifier, the proposed model reached a classification accuracy of 92.03%, compared to 92.20% for the state-of-the-art classifier AlexNet and 91.88% for VGG16. When used as feature extractors, all three models paired best with the Random Forest algorithm, and the differences in accuracy between the feature extractors were small. The proposed feature extractor combined with Random Forest reached an accuracy of 82.56%, compared to 81.93% for AlexNet and 79.14% for VGG16.
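
To make the comparison concrete, the following is a minimal Python/Keras sketch (not the thesis code) of such a pipeline: a small CNN with three convolutional layers, three pooling layers and two dense layers, used first as a standalone classifier and then reused as a feature extractor feeding scikit-learn's Random Forest. The input size, layer widths, class count and commented-out training calls are illustrative assumptions only.

from tensorflow.keras import layers, models
from sklearn.ensemble import RandomForestClassifier

NUM_CLASSES = 5              # assumed number of chip classes (hypothetical)
INPUT_SHAPE = (128, 128, 3)  # assumed input image size (hypothetical)

def build_small_cnn():
    # Three convolutional + three pooling layers followed by two dense layers,
    # matching the layer counts described in the abstract.
    model = models.Sequential([
        layers.Input(shape=INPUT_SHAPE),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),             # dense layer 1 (feature vector)
        layers.Dense(NUM_CLASSES, activation="softmax"),  # dense layer 2 (classifier)
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

cnn = build_small_cnn()
# Standalone classification: train end to end on labelled chip images.
# cnn.fit(train_images, train_labels, epochs=10, validation_data=(val_images, val_labels))

# CNN feature extraction + ML: take the output of the penultimate dense layer
# as a fixed feature vector and train a Random Forest on it.
feature_extractor = models.Model(inputs=cnn.input, outputs=cnn.layers[-2].output)
# features = feature_extractor.predict(train_images)
# rf = RandomForestClassifier(n_estimators=100)
# rf.fit(features, train_labels)
# predictions = rf.predict(feature_extractor.predict(test_images))

A network this small keeps inference cheap on limited hardware, which is the abstract's motivation for the proposed model, while the Random Forest step reuses the learned features without retraining the whole network.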

Place, publisher, year, edition, pages
2021, p. 61
Keywords [en]
Object detection, deep learning, machine learning, computer vision, classification, manufacturing, small object detection
National Category
Computer Sciences; Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:mdh:diva-55068
OAI: oai:DiVA.org:mdh-55068
DiVA, id: diva2:1571550
External cooperation
SECO Tools
Subject / course
Computer Science
Presentation
2021-06-04, Teams online, 20:03 (English)
Supervisors
Examiners
Projects
DIGICOGS
Available from: 2021-06-23. Created: 2021-06-22. Last updated: 2021-06-23. Bibliographically approved.

Open Access in DiVA

fulltext (43577 kB), 504 downloads
File information
File name: FULLTEXT01.pdf
File size: 43577 kB
Checksum: SHA-512
1b600dbe626d734dff069bb8d6c4bbb2d4dbd3d8f6bcb83ae7f477a852ee3d9c725d7b0592daf03dbdc83ae6a57d29c0d6d96dc28421a3d4794158b5e75a589a
Type: fulltext. Mimetype: application/pdf

By author/editor
Andersson Dickfors, Robin; Grannas, Nick
By organisation
School of Innovation, Design and Engineering
