Coco.names Yolov3 Download


Heidi Asman

Jul 22, 2024, 8:10:13 AM
to fimorogu

!./darknet detector train "/content/gdrive/My Drive/darknet/obj.data" "/content/gdrive/My Drive/darknet/cfg/yolov3-PID.cfg" "/content/gdrive/My Drive/darknet/backup/yolov3-PID_final.weights" -dont_show

!./darknet detect "/content/gdrive/My Drive/darknet/cfg/yolov3-PID.cfg" "/content/gdrive/My Drive/darknet/backup/yolov3-PID_final.weights" "/content/gdrive/My Drive/darknet/img/MN 111-0-515 (45).jpg" -dont_show

The error is likely in obj.data; it seems your goal here is to detect 13 custom objects. If so, set classes=13 and replace names=data/coco.names with names=data/obj.names. The obj.names file should contain 13 lines, one per custom class name. Also modify yolov3-PID.cfg to use the same number of classes.
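For reference, a minimal obj.data for 13 classes might look like the sketch below; the train, valid, and backup paths here are placeholders for your own setup:

classes = 13
train = data/train.txt
valid = data/test.txt
names = data/obj.names
backup = backup/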

Nice work coming this far! Everything is fine, you just need to edit the data folder of darknet. By default it uses the COCO labels: go to the darknet folder --> find the data folder --> open the coco.names file --> edit the file by removing the 80 classes (in Colab, just double-click to edit and Ctrl+S to save) --> put down your desired classes, and it's done!

I then analysed the same video with different model configurations and hardware. To use yolov3-tiny, change the config and weights file paths. To change the input size of the YOLOv3 model, open the config file and change the height and width parameters. I have tested it with 608 (the default), 416, and 320. For the GPU runs, I used a GCP compute instance with one NVIDIA K10 GPU. The FPS from the different runs can be found in the table below.
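For example, to switch from the default 608 to 416 without opening an editor, a sed one-liner along these lines works (the stock cfg/yolov3.cfg path is assumed here):

sed -i 's/^width=608/width=416/; s/^height=608/height=416/' cfg/yolov3.cfg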

detector = yolov3ObjectDetector(name,classes,aboxes) creates a pretrained YOLO v3 object detector and configures it to perform transfer learning using a specified set of object classes and anchor boxes. For optimal results, you must train the detector on new training images before performing detection.

detector = yolov3ObjectDetector(___,Name,Value) sets the InputSize and ModelName properties of the object detector by using name-value pair arguments. Name is the property name and Value is the corresponding value. You must enclose each property name in quotes.

Names of object classes for training the detector, specified as a string vector, cell array of character vectors, or categorical vector. This argument sets the ClassNames property of the yolov3ObjectDetector object.

Download the file coco.names from here; it contains the names of the objects in the COCO dataset. Create a folder named data in your detector directory. Equivalently, if you're on Linux you can type the commands below.
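A minimal sketch, assuming the copy in the official pjreddie/darknet repository is the one you want:

mkdir -p data
wget -O data/coco.names https://raw.githubusercontent.com/pjreddie/darknet/master/data/coco.names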

This .names file holds all your categories, one category per line. It can be given a custom name, such as data/coco.names or khadas_ai/khadas_ai.names. A file containing the two categories KuLi and DuLanTe is shown below:
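KuLi
DuLanTe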

YOLOv3 has two important files: yolov3.cfg and yolov3.weights. The file yolov3.cfg contains all information related to the YOLOv3 architecture and its parameters, whereas the file yolov3.weights contains the pre-trained parameters of the YOLOv3 convolutional neural network (CNN).
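If you still need to fetch both, the commonly used sources are the DarkNet project site for the weights and the pjreddie/darknet repository for the config; these URLs are the usual ones, not taken from this thread:

wget https://pjreddie.com/media/files/yolov3.weights
wget https://raw.githubusercontent.com/pjreddie/darknet/master/cfg/yolov3.cfg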

classes, coords, num, and masks are attributes that you should copy from the configuration file that was used for model training. If you used the officially shared DarkNet weights, you can use the yolov3.cfg or yolov3-tiny.cfg configuration file from the official DarkNet repository. Replace the default values in custom_attributes with the parameters that follow the [yolo] titles in the configuration file.
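For reference, these values live under each [yolo] heading; the excerpt below is one of the three [yolo] sections of the stock yolov3.cfg, with only the relevant lines shown:

[yolo]
mask = 6,7,8
anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326
classes=80
num=9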

Now zip the folder that contains all the images along with the .txt files specifying the locations of the objects, and upload it to your Google Drive. Also, make a folder in your Drive named yolov3 and place the zip file in that folder.
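From the command line that is a one-liner; obj/ below is a placeholder for whatever your image folder is called:

zip -r obj.zip obj/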

It may take around 5-6 hours before you see your average loss approach 0.1, at which point you can stop the training by interrupting the cell. You will then see the new weights file in the yolov3 folder of your Google Drive.

Here we will use YOLO v3 as our model for detecting the person in the frame. So we need to download the YOLO v3 weights and configuration files, as well as the COCO class list, the coco.names file. You can download them from here.

COCO Dataset (names file): The Common Objects in Context (COCO) database serves as a valuable resource for advancing research in areas such as object detection, instance segmentation, image captioning, and person keypoint localization. It is a comprehensive dataset designed to facilitate diverse studies within the field. COCO encompasses a vast collection of annotated images, enabling large-scale object detection, segmentation, and captioning tasks. The dataset comprises various object categories, including but not limited to people, cars, buses, cats, dogs, and bottles. Its class list is distributed as a .names file, i.e., coco.names.
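For a concrete picture, the first few entries of the DarkNet copy of coco.names read as follows, one class per line:

person
bicycle
car
motorbike
aeroplane
bus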

The tinyyolov3Detect entry-point function takes an image input and runs the detector on the image. The function loads the network object from the tinyyolov3coco.mat file into a persistent variable yolov3Obj and reuses the persistent object during subsequent detection calls.

Load the SqueezeNet network pretrained on the ImageNet data set and then specify the class names. You can also choose to load a different pretrained network trained on the COCO data set, such as tiny-yolov3-coco or darknet53-coco, or on the ImageNet data set, such as MobileNet-v2 or ResNet-18. YOLO v3 performs better and trains faster when you use a pretrained network.

Next, create the yolov3ObjectDetector object by adding the detection network source. Choosing the optimal detection network source requires trial and error; you can use analyzeNetwork to find the names of potential detection network sources within a network. For this example, use the fire9-concat and fire5-concat layers as the DetectionNetworkSource.

The function modelGradients takes the yolov3ObjectDetector object, a mini-batch of input data XTrain with corresponding ground truth boxes YTrain, and the specified penalty threshold as input arguments, and returns the gradients of the loss with respect to the learnable parameters in yolov3ObjectDetector, the corresponding mini-batch loss information, and the state of the current batch.

Convert the predictions from the YOLO v3 grid cell coordinates to bounding box coordinates to allow easy comparison with the ground truth data by using the anchorBoxGenerator method of yolov3ObjectDetector.
