This repository has been archived by the owner on Jan 18, 2024. It is now read-only.

Labelling #107

Open
jessiffmm opened this issue Jan 30, 2019 · 7 comments

Comments

@jessiffmm
Contributor

Hi,

I'm trying to use the sampleGenerator tool for labelling traffic images. I have seen that a file with tags is automatically generated and an image is saved with the tagged object, but the results I get are pretty bad, because the vehicles are not labeled. I have another question: how do I indicate which class each object belongs to?

Regards!

@vinay0410
Collaborator

vinay0410 commented Feb 2, 2019

Hi,
I am assuming that you are using generator.cpp and have both RGB and depth images in Recorder format; if not, please tell me your image format and I will write a new reader for it.

I would also like to see the config file you passed to sampleGenerator, to help you further.

Also, @jmplaza told me that you are using the Darknet YOLO inferencer.

Finally, please post the output you get after running it.

@jessiffmm
Contributor Author

Hi,

I used dl-DetectionSuite/sampleGenerator/simpleSampleGenerator/main.py.
How can I use sampleGenerator? I haven't found any documentation on how to use it, and I don't know which configuration file I have to use.

Yes, I'm trying to train a YOLO network with my dataset.

Thanks!

@vinay0410
Collaborator

vinay0410 commented Feb 2, 2019 via email

@jessiffmm
Contributor Author

Hi,

I gather that I need a config file, but I don't know which configuration file I have to use.
I want to label my images manually. My images are RGB only; I don't have depth images. And I want to get an XML file with the tags.

@vinay0410
Collaborator

vinay0410 commented Feb 3, 2019

Hi @jessiffmm ,
I have pushed a hotfix for this functionality to the label branch.
Below is a sample YAML config file you can use to run the SampleGeneratorApp present in build/SamplegeneratorApp:

```yaml
outputPath: /opt/datasets/sample/output
detector: deepLearning
inferencerImplementation: yolo
inferencerNames: /opt/datasets/names/voc.names
inferencerConfig: /opt/datasets/cfg/yolo-voc.cfg
inferencerWeights: /opt/datasets/weights/yolo-voc.weights
reader: directory
dataPath: /opt/datasets/sample_images
```

Please edit outputPath, inferencerNames, inferencerConfig, inferencerWeights and dataPath (directory containing all the images) according to your needs.

After running, it will automatically detect some objects, and you can add more by clicking and dragging.

Then press the space bar.

Now the current object will be selected in blue, and you can change its position by dragging its edges.
After that, press the number key of the class it belongs to, like 1, 2, 3.

Fran wrote this specifically for person labelling, and the class mapping is currently hard-coded, as shown below:

https://github.com/JdeRobot/dl-DetectionSuite/blob/ae74bbb4a88c4176840a70de13cede47997bfa39/DeepLearningSuite/DeepLearningSuiteLib/GenerationUtils/DetectionsValidator.cpp#L166-L173
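A more flexible alternative would read the class list from the inferencerNames file and map digit keys onto it. The sketch below is a hypothetical replacement, not DetectionSuite's actual API; the function name and behavior are assumptions:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical replacement for the hard-coded mapping: digit keys '1'..'9'
// index into the class list loaded from the inferencerNames file.
// Returns an empty string for keys outside the list.
std::string classForKey(int key, const std::vector<std::string>& classNames) {
    if (key < '1' || key > '9') return "";
    std::size_t idx = static_cast<std::size_t>(key - '1');
    return idx < classNames.size() ? classNames[idx] : "";
}
```

With a mapping like this, the key-to-class assignment would follow the .names file instead of requiring a code change per dataset.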

Also, it outputs JSON files to the output path provided.

I understand that it's not very user-friendly at the moment, but making it user-friendly is an entire project in itself.

@jessiffmm
Contributor Author

Hi @vinay0410

Perfect, I will try it.
One other thing: do you know of a website with information on how I can build my own YOLO network?

Thanks!!

@jessiffmm
Contributor Author

jessiffmm commented Feb 18, 2019

Hi @vinay0410

I tried to use SampleGenerationApp, but I get some errors.

1- I have this config.yml:

```yaml
outputPath: /home/vanejessi/dl-DetectionSuite/DeepLearningSuite/build/SampleGenerationApp/annotations/
detector: deepLearning
inferencerImplementation: yolo
inferencerNames: /opt/datasets/names/label_yolo.names
inferencerConfig: /opt/datasets/cfg/yolov3-voc.cfg
inferencerWeights: /opt/datasets/weights/yolov3-voc_17000.weights
reader: directory
dataPath: /home/vanejessi/dl-DetectionSuite/DeepLearningSuite/build/SampleGenerationApp/images
```

I understand that I should use the directory reader if I have dataPath, and recorder-rgbd if I have depth and RGB images, but the branches are reversed (generator.cpp, line 106):

```cpp
RecorderReaderPtr converter;
if (reader.as<std::string>() == "recorder-rgbd") {
    std::cout << dataPath.as<std::string>() << "reader";
    converter = RecorderReaderPtr(new RecorderReader(dataPath.as<std::string>()));
} else {
    converter = RecorderReaderPtr(new RecorderReader(colorImagesPathKey.as<std::string>(),
                                                     depthImagesPathKey.as<std::string>()));
}
```
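If the branches are indeed swapped, the intended selection would look like the sketch below. RecorderReaderStub is a stand-in used here only to make the branching checkable in isolation; the real RecorderReader constructors take the paths shown in the quoted snippet:

```cpp
#include <cassert>
#include <string>

// Stand-in for DetectionSuite's RecorderReader, recording which constructor
// overload was chosen so the branch logic can be checked in isolation.
struct RecorderReaderStub {
    bool rgbd;
    explicit RecorderReaderStub(const std::string& /*dataPath*/) : rgbd(false) {}
    RecorderReaderStub(const std::string& /*colorPath*/,
                       const std::string& /*depthPath*/) : rgbd(true) {}
};

// Branching as this comment expects it: "recorder-rgbd" takes separate color
// and depth paths, while "directory" takes the single dataPath.
RecorderReaderStub makeReader(const std::string& reader,
                              const std::string& dataPath,
                              const std::string& colorPath,
                              const std::string& depthPath) {
    if (reader == "recorder-rgbd")
        return RecorderReaderStub(colorPath, depthPath);
    return RecorderReaderStub(dataPath);
}
```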

2- RecorderReader.cpp only admits PNG images (line 40):

```cpp
if (boost::filesystem::is_regular_file(*dir_itr) && dir_itr->path().extension() == ".png")
```
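If other formats should also pass, the check could compare against a set of extensions instead of the single hard-coded ".png". A sketch using std::filesystem; the extension list here is an assumption:

```cpp
#include <cassert>
#include <cctype>
#include <filesystem>
#include <set>
#include <string>

// Case-insensitive check against a small set of image extensions, rather than
// the single hard-coded ".png" comparison in RecorderReader.cpp.
bool isSupportedImage(const std::filesystem::path& p) {
    static const std::set<std::string> kExtensions = {".png", ".jpg", ".jpeg"};
    std::string ext = p.extension().string();
    // Normalize so ".PNG" and ".Jpg" also match.
    for (char& c : ext) c = static_cast<char>(std::tolower(static_cast<unsigned char>(c)));
    return kExtensions.count(ext) > 0;
}
```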

Also, if I leave in the following part, it does not read the images (line 42):

```cpp
if (not sufix.empty()) {
    std::string filename = dir_itr->path().stem().string();
    if (!boost::algorithm::ends_with(filename, sufix)) {
        continue;
    }
    onlyIndexFilename = dir_itr->path().filename().stem().string();
    boost::erase_all(onlyIndexFilename, sufix);
}
```

3- I get the following error:

```
terminate called after throwing an instance of 'cv::Exception'
  what(): OpenCV(3.4.3-dev) /home/vanejessi/opencv/modules/imgproc/src/color.cpp:181: error: (-215:Assertion failed) !_src.empty() in function 'cvtColor'

Aborted (core dumped)
```

I think it doesn't open the images correctly because the image directory path is wrong.
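That assertion typically fires when cv::imread returned an empty matrix, which usually means the path was wrong; the failure only surfaces later in cvtColor. A cheap way to diagnose it is to verify the file exists before handing the path to the reader (a sketch, not DetectionSuite code):

```cpp
#include <cassert>
#include <filesystem>
#include <string>

// cv::imread silently returns an empty cv::Mat for a bad path, and the error
// only appears later as cvtColor's !_src.empty() assertion. Checking the path
// up front gives a clearer error location.
bool imageFileExists(const std::string& path) {
    namespace fs = std::filesystem;
    return fs::exists(path) && fs::is_regular_file(path);
}
```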

In the AutoEvaluator tool I needed to change (ImageNetDatasetReader.cpp, line 114):

```cpp
if (imagesRequired) {
    std::string imgPath = img_dir.string() + "/" + m_filename + ".JPEG";
    sample.setColorImage(imgPath);
}
```

to:

```cpp
if (imagesRequired) {
    //std::string imgPath = img_dir.string() + "/" + m_filename + ".JPEG";
    std::string imgPath = "/home/vanejessi/dl-DetectionSuite/DeepLearningSuite/build/Tools/AutoEvaluator/images/" + m_filename;
    sample.setColorImage(imgPath);
}
```

because it didn't find the images.

Regards
