
CHAPTER 4 – CONCLUSION & FUTURE SCOPE

In this study, we developed a large-scale data set for road damage detection and classification. The road damage images were classified into eight classes; of these, 7,240 images were annotated and released as a training data set. We trained and evaluated the damage detection model on our data set. In the future, we plan to develop methods that can detect rare types of damage that are uncommon in our data set. In addition, news classification based on headlines was performed using different machine learning approaches.

Classification can be performed on any set of data. The ability of text classification to work with or without a tagged dataset widens the range of settings in which the technology can be applied [4]. Overall, text classification and object detection form a very challenging research area with several applications and use cases [4][19]. Most notable among these is tagging content or products with categories to improve browsing or to identify related content on a website.
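When no tagged dataset is available, related content can still be grouped automatically by clustering. A minimal sketch, assuming toy documents and a known number of clusters:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical untagged documents: two about finance, two about football.
docs = [
    "stock market rally lifts shares",
    "stock market slides on weak earnings",
    "football team wins the cup final",
    "football team loses opening match",
]

# Vectorize with TF-IDF, then group into two clusters with k-means.
X = TfidfVectorizer(stop_words="english").fit_transform(docs)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(clusters)  # cluster label per document
```

The cluster labels themselves are arbitrary; a human then names each cluster once, which is far cheaper than tagging every document by hand.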


Platforms such as e-commerce sites, news agencies, content curators, blogs, and directories can use automated techniques to classify and tag content and products. Text classification can also be used to automate CRM tasks: the text classifier is highly customizable and can be trained accordingly, so CRM tasks can be assigned and analyzed directly by importance and relevance, reducing manual work and saving time. Classifying website content with tags also helps Google crawl the site, which ultimately helps SEO. Additionally, automating the content tags on a website or app improves the user experience and helps standardize the tags. Another use case for marketers is researching and analyzing the tags and keywords used by competitors; text classification can automate and speed up this process. A faster emergency response system can be built by classifying panic conversations on social media, so that authorities can monitor and classify emergency situations and respond quickly when one arises. This is a case of very selective classification. Academia, law practitioners, social researchers, governments, and non-profit organizations can also make use of text classification technology: since these organizations deal with a lot of unstructured text, handling the data becomes much easier once it is standardized by categories/tags.

For future work, we would like to apply comparable techniques to weakly supervised image segmentation. We also plan to improve our detection results by using more powerful matching strategies for assigning weak labels to classification data during training, since computer vision benefits from a huge amount of labeled data. Later, we can further refine the network architecture for the damaged road objects.
In addition, strategies such as model ensembles, cascaded detection, and multi-scale inference can be tried to further improve its performance.

REFERENCES

[1] M. I. Rana, S. Khalid and M. U. Akbar, "News Classification Based On Their Headlines: A Review," in IEEE, 2014.
[2] "Text Classification," Internet, Oct 1, 2018 [Nov 15, 2018].
[3] "Image Classification," Internet, 2018.
[4] S. Gupta, "Text Classification: Applications and Use Cases," Internet, Feb 20, 2018.
[5] D. Greene and P. Cunningham, "Practical Solutions to the Problem of Diagonal Dominance in Kernel Document Clustering," in Proc. ICML, 2006.
[6] "Road Damage Detection and Classification Challenge," Internet, Jun 13, 2018 [Mar 05, 2019].
[7] M. Varone, D. Mayer and A. Melegari, "What is Machine Learning? A Definition," Internet, 2018.
[8] M. Rouse, "Machine Learning (ML)," Internet, 2018.
[9] S. Saxena, "Introduction to Deep Learning," Internet [Apr 30, 2019].
[10] "Anaconda (Python Distribution)," Internet, Oct 30, 2018.
[11] "Spyder (Software)," Internet, Sept 23, 2018.
[12] M. Mayo, "Natural Language Processing Key Terms, Explained," Internet, Feb 2017 [Nov 20, 2018].
[13] B. Nikhil, "Image Data Pre-Processing for Neural Networks," Internet, Sep 10, 2017 [Apr 30, 2019].
[14] S. Raschka, "Naïve Bayes and Text Classification," Internet, Oct 4, 2014 [Nov 19, 2018].
[15] R. Gandhi, "Introduction to Machine Learning Algorithms," Internet, Jun 13, 2016 [Sept 13, 2018].
[16] M. Haltuf, "Support Vector Machines for Credit Scoring," Master's thesis, University of Economics in Prague, Prague, 2014.
[17] S. Saha, "Introduction to Deep Convolutional Neural Networks," Internet, Dec 2017 [Apr 30, 2019].
[18] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu and A. C. Berg, "SSD: Single Shot MultiBox Detector," 2016.
[19] W. Wang, B. Wu, S. Yang and Z. Wang, "Road Damage Detection and Classification with Faster R-CNN," in IEEE, 2018.
APPENDICES

APPENDIX A

# Imports for the text classification scripts (Appendix B).
import re
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from textblob import Word
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn import svm
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix
import seaborn as sns; sns.set()
import matplotlib.pyplot as plt
from sklearn.metrics import classification_report
from sklearn.metrics import precision_recall_fscore_support

# Imports for the object detection script (Appendix C).
import six.moves.urllib as urllib
import os
from xml.etree import ElementTree
from xml.dom import minidom
import collections
import matplotlib.pyplot as plt
import matplotlib as matplot
import seaborn as sns
import cv2
import random
import numpy as np
import sys
import tarfile
import tensorflow as tf
import zipfile
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image

APPENDIX B

# Grouped bar chart comparing precision, recall, and F1 score
# for the Naive Bayes, KNN, and SVM classifiers.
n_groups = 3
x = [v[0], v[1], v[2]]  # Naive Bayes scores
y = [u[0], u[1], u[2]]  # KNN scores
z = [t[0], t[1], t[2]]  # SVM scores

fig, ax = plt.subplots()
index = np.arange(n_groups)
bar_width = 0.2
opacity = 1.0
rects1 =, x, bar_width, alpha=opacity, color='b', label='Naive Bayes')
rects2 = + bar_width, y, bar_width, alpha=opacity, color='g', label='KNN')
rects3 = + 2 * bar_width, z, bar_width, alpha=opacity, color='r', label='SVM')
plt.title('Accuracy based features comparison')
plt.xticks(index + bar_width, ('Precision', 'Recall', 'F1 score'))
ax.set_ylim(0.8, 1.05)
plt.legend()
plt.tight_layout()
plt.savefig('report.png')

APPENDIX C

# Run the trained detection graph on each test image and draw
# boxes for detections scoring above the 0.3 threshold.
with detection_graph.as_default():
    with tf.Session(graph=detection_graph) as sess:
        image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
        detection_boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
        detection_scores = detection_graph.get_tensor_by_name('detection_scores:0')
        detection_classes = detection_graph.get_tensor_by_name('detection_classes:0')
        num_detections = detection_graph.get_tensor_by_name('num_detections:0')
        for image_path in TEST_IMAGE_PATHS:
            image =
            image_np = load_image_into_numpy_array(image)
            image_np_expanded = np.expand_dims(image_np, axis=0)
            (boxes, scores, classes, num) =
                [detection_boxes, detection_scores, detection_classes, num_detections],
                feed_dict={image_tensor: image_np_expanded})
            vis_util.visualize_boxes_and_labels_on_image_array(
                image_np,
                np.squeeze(boxes),
                np.squeeze(classes).astype(np.int32),
                np.squeeze(scores),
                category_index,
                min_score_thresh=0.3,
                use_normalized_coordinates=True,
                line_thickness=8)
            plt.figure(figsize=IMAGE_SIZE)
            plt.imshow(image_np)
