YOLOv6t-coco

Introduction

YOLOv6 is a single-stage object detection framework dedicated to industrial applications, with hardware-friendly efficient design and high performance.

YOLO has quickly established itself as one of the most important computer vision models. This is due to its use of multi-scale bounding-box predictions and the fact that an image needs only a single pass through the model pipeline, resulting in both high efficiency and high precision.
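
To make the single-pass idea concrete, below is a minimal inference sketch using ONNX Runtime. The model path, input resolution, and output layout are assumptions for illustration (YOLOv6 exports commonly take a 640×640 RGB image and emit one tensor of candidate boxes); the exact output decoding depends on the export you use.

```python
# Minimal single-pass detection sketch with ONNX Runtime.
# "yolov6t.onnx" and "street.jpg" are placeholder paths, and the
# output layout is an assumption based on common YOLO exports.
import numpy as np
import onnxruntime as ort
from PIL import Image

session = ort.InferenceSession("yolov6t.onnx")   # hypothetical ONNX export
input_name = session.get_inputs()[0].name

# Preprocess: resize, scale to [0, 1], reorder HWC -> NCHW.
img = Image.open("street.jpg").convert("RGB").resize((640, 640))
x = np.asarray(img, dtype=np.float32) / 255.0
x = x.transpose(2, 0, 1)[None, ...]              # shape (1, 3, 640, 640)

# One forward pass produces every candidate box at once --
# there is no separate region-proposal stage to run.
preds = session.run(None, {input_name: x})[0]
print(preds.shape)  # e.g. (1, N, 85): x, y, w, h, objectness, 80 class scores
```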

The original YOLO model was created by Joseph Redmon in 2016. Since then, numerous iterations of YOLO have been released by different groups, varying in the size of the images the model was trained on and in the number of parameters in the architecture. Each model excels at a different task, with some trading accuracy for greater efficiency.

More Info


YOLOv6 Paper

YOLOv6: A Single-Stage Object Detection Framework for Industrial Applications

Authors: Chuyi Li, Lulu Li, Hongliang Jiang, Kaiheng Weng, Yifei Geng, Liang Li, Zaidan Ke, Qingyuan Li, Meng Cheng, Weiqiang Nie, Yiduo Li, Bo Zhang, Yufei Liang, Linyuan Zhou, Xiaoming Xu, Xiangxiang Chu, Xiaoming Wei, Xiaolin Wei

Abstract

For years, the YOLO series has been the de facto industry-level standard for efficient object detection. The YOLO community has prospered overwhelmingly to enrich its use in a multitude of hardware platforms and abundant scenarios. In this technical report, we strive to push its limits to the next level, stepping forward with an unwavering mindset for industry application. Considering the diverse requirements for speed and accuracy in the real environment, we extensively examine the up-to-date object detection advancements either from industry or academia. Specifically, we heavily assimilate ideas from recent network design, training strategies, testing techniques, quantization, and optimization methods. On top of this, we integrate our thoughts and practice to build a suite of deployment-ready networks at various scales to accommodate diversified use cases. With the generous permission of YOLO authors, we name it YOLOv6. We also express our warm welcome to users and contributors for further enhancement. For a glimpse of performance, our YOLOv6-N hits 35.9% AP on the COCO dataset at a throughput of 1234 FPS on an NVIDIA Tesla T4 GPU. YOLOv6-S strikes 43.5% AP at 495 FPS, outperforming other mainstream detectors at the same scale (YOLOv5-S, YOLOX-S, and PPYOLOE-S). Our quantized version of YOLOv6-S even brings a new state-of-the-art 43.3% AP at 869 FPS. Furthermore, YOLOv6-M/L also achieves better accuracy performance (i.e., 49.5%/52.3%) than other detectors with a similar inference speed. We carefully conducted experiments to validate the effectiveness of each component. Our code is made available at https://github.com/meituan/YOLOv6.

YOLO Original Paper

You Only Look Once: Unified, Real-Time Object Detection

Authors: Joseph Redmon, Santosh Divvala, Ross Girshick, Ali Farhadi

Abstract

We present YOLO, a new approach to object detection. Prior work on object detection repurposes classifiers to perform detection. Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. Our unified architecture is extremely fast. Our base YOLO model processes images in real-time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Compared to state-of-the-art detection systems, YOLO makes more localization errors but is less likely to predict false positives on background. Finally, YOLO learns very general representations of objects. It outperforms other detection methods, including DPM and R-CNN, when generalizing from natural images to other domains like artwork.


Dataset

YOLOv6 was trained on the MS COCO (Microsoft Common Objects in Context) 2017 training set, and the accuracy is evaluated on the COCO 2017 validation set.

COCO is a large-scale object detection, segmentation, keypoint detection, and captioning dataset. It consists of 328K images and contains around 1.5 million object instances, 80 object categories, 91 stuff categories, 5 captions per image, and 250,000 person instances labeled with keypoints (17 possible keypoints, such as left eye, nose, right hip, and right ankle). The captions are natural language descriptions of the images from the paired MS COCO Captions dataset.
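
A quick way to explore this structure is the official pycocotools API. The annotation file path below is an assumption; point it at your local copy of the COCO 2017 annotations.

```python
# Browse COCO 2017 annotations with the official pycocotools API.
# The annotation path is an assumption; adjust it to your local copy.
from pycocotools.coco import COCO

coco = COCO("annotations/instances_val2017.json")

# The 80 object categories ("person", "bicycle", "car", ...).
cats = coco.loadCats(coco.getCatIds())
print(len(cats), [c["name"] for c in cats[:5]])

# Every annotated instance of one category, e.g. "person".
person_id = coco.getCatIds(catNms=["person"])[0]
ann_ids = coco.getAnnIds(catIds=[person_id])
print(f"{len(ann_ids)} person instances in val2017")

# Each annotation carries a bounding box [x, y, width, height]
# plus a per-instance segmentation.
ann = coco.loadAnns(ann_ids[:1])[0]
print(ann["bbox"], ann["iscrowd"])
```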

More Info

COCO Paper

Microsoft COCO: Common Objects in Context

Authors: Tsung-Yi Lin, Michael Maire, Serge Belongie, Lubomir Bourdev, Ross Girshick, James Hays, Pietro Perona, Deva Ramanan, C. Lawrence Zitnick, Piotr Dollár

Abstract

We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 object types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.

COCO Captions Paper

Microsoft COCO Captions: Data Collection and Evaluation Server

Authors: Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, C. Lawrence Zitnick

Abstract

In this paper we describe the Microsoft COCO Caption dataset and evaluation server. When completed, the dataset will contain over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human generated captions will be provided. To ensure consistency in evaluation of automatic caption generation algorithms, an evaluation server is used. The evaluation server receives candidate captions and scores them using several popular metrics, including BLEU, METEOR, ROUGE and CIDEr. Instructions for using the evaluation server are provided.


Performance and Benchmarks

| Model | Size | mAP val 0.5:0.95 | Speed T4 TRT FP16 b1 (FPS) | Speed T4 TRT FP16 b32 (FPS) | Params (M) | FLOPs (G) |
| --- | --- | --- | --- | --- | --- | --- |
| YOLOv6-N | 640 | 35.9 (300e) / 36.3 (400e) | 802 | 1234 | 4.3 | 11.1 |
| YOLOv6-T | 640 | 40.3 (300e) / 41.1 (400e) | 449 | 659 | 15.0 | 36.7 |
| YOLOv6-S | 640 | 43.5 (300e) / 43.8 (400e) | 358 | 495 | 17.2 | 44.2 |
| YOLOv6-M | 640 | 49.5 | 179 | 233 | 34.3 | 82.2 |
| YOLOv6-L-ReLU | 640 | 51.7 | 113 | 149 | 58.5 | 144.0 |
| YOLOv6-L | 640 | 52.5 | 98 | 121 | 58.5 | 144.0 |

(300e / 400e denote mAP after 300 and 400 training epochs, respectively.)
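
The table contrasts single-image speed (b1) with large-batch throughput (b32). A rough way to reproduce this kind of measurement locally is to time repeated forward passes, as in the sketch below. The ONNX model path is a placeholder, the export is assumed to have a dynamic batch dimension, and numbers will differ from the table, whose figures were measured with TensorRT FP16 on a Tesla T4.

```python
# Rough throughput measurement: images/second at batch size 1 vs. 32.
# "yolov6t.onnx" is a placeholder; the export must support dynamic batching.
import time
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("yolov6t.onnx")   # hypothetical ONNX export
input_name = session.get_inputs()[0].name

def fps(batch_size: int, iters: int = 50) -> float:
    x = np.random.rand(batch_size, 3, 640, 640).astype(np.float32)
    session.run(None, {input_name: x})           # warm-up pass
    start = time.perf_counter()
    for _ in range(iters):
        session.run(None, {input_name: x})
    elapsed = time.perf_counter() - start
    return batch_size * iters / elapsed          # images per second

print(f"bs=1:  {fps(1):.0f} FPS")
print(f"bs=32: {fps(32):.0f} FPS")
```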

Comparisons with other YOLO-series detectors on COCO 2017 val:

| Method | Input Size | APval | APval50 | FPS (bs=1) | FPS (bs=32) | Latency (bs=1) | Params | FLOPs |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| YOLOv5-N [10] | 640 | 28.0% | 45.7% | 602 | 735 | 1.7 ms | 1.9 M | 4.5 G |
| YOLOv5-S [10] | 640 | 37.4% | 56.8% | 376 | 444 | 2.7 ms | 7.2 M | 16.5 G |
| YOLOv5-M [10] | 640 | 45.4% | 64.1% | 182 | 209 | 5.5 ms | 21.2 M | 49.0 G |
| YOLOv5-L [10] | 640 | 49.0% | 67.3% | 113 | 126 | 8.8 ms | 46.5 M | 109.1 G |
| YOLOX-Tiny [7] | 416 | 32.8% | 50.3%* | 717 | 1143 | 1.4 ms | 5.1 M | 6.5 G |
| YOLOX-S [7] | 640 | 40.5% | 59.3%* | 333 | 396 | 3.0 ms | 9.0 M | 26.8 G |
| YOLOX-M [7] | 640 | 46.9% | 65.6%* | 155 | 179 | 6.4 ms | 25.3 M | 73.8 G |
| YOLOX-L [7] | 640 | 49.7% | 68.0%* | 94 | 103 | 10.6 ms | 54.2 M | 155.6 G |
| PPYOLOE-S [45] | 640 | 43.1% | 59.6% | 327 | 419 | 3.1 ms | 7.9 M | 17.4 G |
| PPYOLOE-M [45] | 640 | 49.0% | 65.9% | 152 | 189 | 6.6 ms | 23.4 M | 49.9 G |
| PPYOLOE-L [45] | 640 | 51.4% | 68.6% | 101 | 127 | 10.1 ms | 52.2 M | 110.1 G |
| YOLOv7-Tiny [42] | 416 | 33.3%* | 49.9%* | 787 | 1196 | 1.3 ms | 6.2 M | 5.8 G |
| YOLOv7-Tiny [42] | 640 | 37.4%* | 55.2%* | 424 | 519 | 2.4 ms | 6.2 M | 13.7 G* |
| YOLOv7 [42] | 640 | 51.2% | 69.7% | 110 | 122 | 9.0 ms | 36.9 M | 104.7 G |
| YOLOv6-N | 640 | 35.9% | 51.2% | 802 | 1234 | 1.2 ms | 4.3 M | 11.1 G |
| YOLOv6-T | 640 | 40.3% | 56.6% | 449 | 659 | 2.2 ms | 15.0 M | 36.7 G |
| YOLOv6-S | 640 | 43.5% | 60.4% | 358 | 495 | 2.8 ms | 17.2 M | 44.2 G |
| YOLOv6-M‡ | 640 | 49.5% | 66.8% | 179 | 233 | 5.6 ms | 34.3 M | 82.2 G |
| YOLOv6-L-ReLU* | 640 | 51.7% | 69.2% | 113 | 149 | 8.8 ms | 58.5 M | 144.0 G |
| YOLOv6-L‡ | 640 | 52.5% | 70.0% | 98 | 121 | 10.2 ms | 58.5 M | 144.0 G |
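
The AP columns follow the standard COCO evaluation protocol (mean AP averaged over IoU thresholds 0.50:0.95). Given the ground-truth annotations and a JSON file of detections in COCO results format, the official evaluator reproduces these numbers; both file paths below are assumptions.

```python
# Standard COCO mAP evaluation with pycocotools.
# Both paths are assumptions: the COCO 2017 val annotations and your
# model's detections in COCO results format.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("annotations/instances_val2017.json")
coco_dt = coco_gt.loadRes("yolov6t_detections.json")  # [{image_id, category_id, bbox, score}, ...]

evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()  # first printed line is AP @ IoU=0.50:0.95, the APval column
```
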
  • Description: This is my first brand new community app
  • Last Updated: Mar 10, 2023
  • Default Language: en