Off-the-shelf machine learning datasets repository from Appen. Find 250+ datasets across 80 languages and dialects for a variety of common AI and ML use cases.
This dataset contains images of parking signs in different shapes, colors, orientations, and sizes, collected from neighborhoods across San Francisco and annotated using the Appen platform. The annotations enable training models to detect parking signs in the city. These annotated parking signs can help train OCR models to recognize signage relevant to parking and self-driving cars, teaching models to ignore store signage, billboards, and other potentially confusing outdoor text.
Parking sign detection combines computer vision, natural language processing, and spatial reasoning, and is an ongoing project at Appen. Our goal is to further this work by using deep learning methods to build more accurate models and to extend parking sign detection to other cities, particularly dense areas where parking signs could be confused with other man-made objects. To learn more about this project, please read this paper.
The input data for this job are street-view images from around San Francisco. In the “Data” tab above, you’ll find the annotated images split into training and validation sets.
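As a minimal sketch of working with a train/validation split like the one described above, the snippet below builds a list of image annotations and partitions it 80/20. The file names, label values, and split ratio here are illustrative assumptions, not the actual structure of the Appen download.

```python
import random

# Hypothetical annotation records; the real dataset's file names and
# label schema come from the "Data" tab download, not from this sketch.
annotations = [
    {"image": f"sf_street_{i:04d}.jpg", "label": "parking_sign"}
    for i in range(100)
]

# Shuffle with a fixed seed so the split is reproducible.
random.seed(42)
random.shuffle(annotations)

# 80/20 train/validation split (ratio is an assumption).
split = int(0.8 * len(annotations))
train, val = annotations[:split], annotations[split:]

print(len(train), len(val))  # 80 20
```

A fixed random seed keeps the split reproducible across runs, which matters when comparing model checkpoints trained on the same partition.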