This repository is dedicated to the evaluation of ADAS (Advanced Driver Assistance Systems) camera images for the Bosch Hackathon. The goal is to assess the quality of the images captured by ADAS cameras and ensure they meet the required standards. All images in this dataset are standardized to be 640 by 480 pixels.
Evaluation categories
1. Centering
2. Focus
3. Lighting
4. Orientation
Also see the guides/ directory for the .pdf version of the user guide, available in English and Spanish.
The image evaluation process is divided into four main categories:
Centering
This category assesses whether the subject of the image is properly centered. An image is considered centered when the main subject or region of interest is well aligned within the frame.
The following figure shows two overlapping images: the reference in the back, with its black features painted blue, and the test image 12.PNG in front, with its black sections painted red.
We assess the centering of an image with respect to the reference by performing the following operations:
- Load the reference and comparison images
- Convert the images to NumPy arrays
- Calculate the cross-correlation between the two images
- Find the location of the maximum correlation peak
- Calculate offsets from the center
- Check if the offset is inside the desired limits
This method lets us determine whether the new image is within the tolerances, all by comparing it to the reference image.
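The steps above can be sketched with an FFT-based cross-correlation in NumPy. This is a minimal illustration, not the project's actual code: the function name, the tolerance values, and the assumption that both images are already loaded as grayscale arrays are ours.

```python
import numpy as np

def centering_offset(reference, test, max_offset=(10, 10)):
    """Estimate the (dy, dx) shift of `test` relative to `reference`
    via FFT-based cross-correlation and check it against tolerances."""
    ref = np.asarray(reference).astype(float)
    img = np.asarray(test).astype(float)
    # Zero-mean both images so the correlation peak reflects structure,
    # not overall brightness.
    ref -= ref.mean()
    img -= img.mean()
    # Circular cross-correlation via the Fourier domain.
    corr = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak coordinates to signed offsets (wrap large indices to
    # negative shifts).
    offsets = [p - size if p > size // 2 else p
               for p, size in zip(peak, corr.shape)]
    dy, dx = offsets
    compliant = abs(dy) <= max_offset[0] and abs(dx) <= max_offset[1]
    return (dy, dx), compliant
```

A shifted copy of an image should produce a correlation peak exactly at the shift, so the compliance check reduces to comparing the recovered offsets against the allowed limits.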
Focus
The focus evaluation checks the sharpness and clarity of the image. It helps determine whether the camera captured a clear, focused image or whether blurriness or distortion is present. As requested by the specifications, focus is evaluated in a region of the image where the central reference square transitions to the white section of the picture. Note that the reference square is slanted at an angle so that the quality of focus can be analyzed at different parts of the image (or of the lens itself). To learn more about the reasoning behind the reference shapes, see the ISO page for the ISO 12233:2023 standard and its earlier versions.
Our solution is based on the standard MTF50 evaluation, as suggested by Bosch. See the following diagram for an explanation of the process:
The analysis proceeds through the following steps:
- Read the image and convert it to grayscale.
- 'Project' the intensity of the pixels along one of the edges into a graph, which should look like a sigmoid function. This graph shows the change in pixel intensity as we move from the black reference-square section to the white section.
- To obtain the rate of change of pixel intensity from the black to the white area, take the derivative of the projected function.
- Via a Fast Fourier Transform, decompose the derivative projection.
- Plot the MTF-like decomposition of the rates of change.
The graph expresses the spatial frequency of the image in that region, given in cycles per pixel. Reading off the frequency at which the MTF reaches 0.50 gives the cycles per pixel at which the contrast of the image is reduced by 50%.
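Assuming the edge intensity profile has already been extracted, the MTF50 computation can be sketched as follows. This is a simplified illustration of the pipeline described above, not the project's implementation; in particular, the slanted-edge oversampling step of ISO 12233 is omitted, and the function name is ours.

```python
import numpy as np

def mtf50_from_edge(edge_profile):
    """Estimate MTF50 (cycles/pixel) from a 1-D edge intensity profile:
    the sigmoid-like projection across the black-to-white edge."""
    esf = np.asarray(edge_profile, dtype=float)
    # Line spread function: derivative of the edge spread function.
    lsf = np.diff(esf)
    # MTF: magnitude of the FFT of the LSF, normalized to 1 at DC.
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]
    freqs = np.fft.rfftfreq(lsf.size)  # spatial frequency, cycles/pixel
    # First frequency at which contrast falls to 50%, with linear
    # interpolation between the two neighboring samples.
    below = np.nonzero(mtf <= 0.5)[0]
    if below.size == 0:
        return freqs[-1]
    i = below[0]
    f0, f1 = freqs[i - 1], freqs[i]
    m0, m1 = mtf[i - 1], mtf[i]
    return f0 + (m0 - 0.5) * (f1 - f0) / (m0 - m1)
```

A sharper edge produces a narrower line spread function, hence a wider MTF and a higher MTF50 value, which is exactly why the metric discriminates in-focus from blurred captures.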
Lighting
The lighting test measures pixel intensity to assess image brightness, a reliable indicator of exposure. The test limits range from a minimum of 170 to a maximum of 250. This test evaluates whether the lighting in a set of 20 images complies with these limits: passing means the standard is met, while failing indicates a deviation. Proper lighting is pivotal for clear and precise image analysis.
We assess the lighting of an image with respect to the reference by performing the following operations:
- Read the Image and Convert to Grayscale Format.
- Apply Binary Thresholding.
- Find the Contours of the shape.
- Extract the central portion of the object to reduce computational complexity.
- Draw Contours on the Original Image.
- Collect Contour Data.
- Calculate Mean RGB Color.
- Evaluate the lighting based on the calculated mean.
By employing this method, we can ascertain whether the new image complies with the established tolerances, all while comparing it to the reference image.
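A NumPy-only sketch of the lighting check follows. It substitutes a simple central crop and intensity threshold for the contour-based region extraction used in the full pipeline, and assumes a grayscale array as input; the threshold value and crop fraction are illustrative, and only the 170–250 limits come from the specification above.

```python
import numpy as np

LIGHT_MIN, LIGHT_MAX = 170, 250  # intensity limits from the specification

def evaluate_lighting(gray, threshold=128, crop_frac=0.5):
    """Check whether the bright region of a grayscale image falls within
    the lighting limits: crop the center, threshold, average."""
    img = np.asarray(gray, dtype=float)
    h, w = img.shape
    # Central crop to reduce computational cost (stand-in for the
    # contour-based object extraction in the full pipeline).
    ch, cw = int(h * crop_frac), int(w * crop_frac)
    top, left = (h - ch) // 2, (w - cw) // 2
    center = img[top:top + ch, left:left + cw]
    # Binary threshold: keep only the bright pixels.
    bright = center[center >= threshold]
    if bright.size == 0:
        return 0.0, False
    mean_intensity = float(bright.mean())
    return mean_intensity, LIGHT_MIN <= mean_intensity <= LIGHT_MAX
```

An underexposed frame yields no (or too-dark) bright pixels and fails low, while an overexposed frame pushes the mean above 250 and fails high.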
See the following images to further observe the differences between the three versions of an image during the light-exposure evaluation: the original, the black-and-white (thresholded) version, and the detected contour.
Orientation
Orientation is crucial for image interpretation. Images must be in the correct orientation to facilitate accurate analysis. To be considered correctly oriented, an image must adhere to the following guidelines:
One of the black reference squares should be positioned at the center of the image. The other black reference square should be at the top right corner of the image. If the top right corner square is not in its designated position, the image is considered to have an incorrect orientation. Tests for correct orientation involve defining a "window" where the top right black reference square should be if oriented correctly. The system then checks if the average brightness of this region is as low as it should be for the square to be considered the reference square.
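The window check described above can be sketched as follows. The window coordinates and darkness threshold are illustrative placeholders, not the project's actual values, and a 640×480 grayscale array is assumed.

```python
import numpy as np

def check_orientation(gray, window=(0, 60, 580, 640), dark_threshold=80):
    """Check orientation by testing whether the top-right 'window'
    (rows r0:r1, cols c0:c1) is dark enough to contain the black
    reference square."""
    img = np.asarray(gray, dtype=float)
    r0, r1, c0, c1 = window
    region_mean = float(img[r0:r1, c0:c1].mean())
    # Low mean brightness => the black square is where it should be.
    return region_mean <= dark_threshold, region_mean
```

Rotating the image moves the dark square out of the window, so the region's mean brightness rises above the threshold and the check fails.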
Here are two examples of images being checked via this method:
Valid orientation
This image is considered valid: when the top-right red region is checked for average grayscale intensity, it lands below the given threshold. Note that the threshold may be determined heuristically, but it could also be adapted to use the center of the image as a basis, since if the image is over- or under-exposed the threshold should change to account for the variation.
Invalid orientation
In this second case the image is considered invalid, since the mean pixel intensity in the specified region is above the given threshold.
Prerequisites: Python 3
- Clone the repository to your computer.
- Install the application's requirements with the following command:
pip install -r requirements.txt
- Create paths.py inside the src/, app/ (and test/ if needed) directories. It should contain something like the following:
main_path = '/<path_to_repo>/ImageQualityEvaluation_Bosch'
To make this project user friendly, a simple GUI application is provided, and the project can also be executed via the Windows Command Prompt or a Linux terminal, according to the user's needs. Brief explanations of how to use each are included below. Furthermore, after running the project, a .csv file containing all the evaluation results is created.
To use the locally run app, download the repository, install the requirements, and from ./app/ run the following command:
streamlit run main.py
The app will run in your default browser. Select an option from the dropdown menu:
Calculate Quality of all Images
The app shows a table ('matrix') of the evaluation results, indicating which tests were passed by which image, all compared against the same reference. The file may also be downloaded via the Download .csv option, just above the table.
Select a Specific Image
The second option lets the user select one image at a time, showing the results in a neat format and displaying the image on the right side of the page.
We welcome contributions from the Bosch Hackathon community. If you have ideas for improving the image quality evaluation process or want to contribute code or documentation, please feel free to open an issue or submit a pull request.
- David Ortiz Cota
- Jorge Alejandro González Díaz
- Jose María Soto Valenzuela
- Pablo Vargas Cárdenas
This project is licensed under the GNU Lesser General Public License v3.0 (LGPL-3.0) - see the LICENSE file for details. The LGPL-3.0 is a weak-copyleft open-source license that allows you to use, modify, and distribute the software, whether in its original form or with modifications. However, any changes made to the original codebase must be released under the same LGPL-3.0 license. For more information about the license, please visit https://opensource.org/licenses/LGPL-3.0.