This dataset contains images of parking signs in different shapes, colors, orientations, and sizes, collected from neighborhoods across San Francisco and annotated using the Appen platform to enable training models that detect parking signs in the city. The annotated signs can help train OCR models to recognize signage relevant to parking and self-driving cars, teaching models to ignore store signage, billboards, and other potentially confusing outdoor text.
Parking sign detection combines computer vision, natural language processing, and spatial reasoning, and is an ongoing project at Appen. Our goal is to further this work with deep learning methods, building more accurate models and extending parking sign detection to other cities, in particular dense areas where parking signs could be confused with other man-made objects. To learn more about this project, please read this paper.
Below, you’ll find a link to the Appen job used to annotate these parking signs. The “Duplicate Job” button above will take you to a template that follows this exact workflow.
Job Design and Instructions
The input data for this job are street view images from around San Francisco. In the “Data” tab above, you’ll find the annotated images broken up into training and validation sets.
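As a rough illustration of how the annotated data might be consumed, here is a minimal Python sketch that groups annotations by their training/validation split. The file name and column names ("split", "image_url", "annotation") are assumptions for illustration only; check the actual export from the "Data" tab for the real schema.

```python
# Hypothetical loader for the exported annotations CSV.
# File name and column names are assumptions, not the actual Appen export schema.
import csv
from collections import defaultdict

def load_annotations(csv_path):
    """Group parking-sign annotations by their train/validation split."""
    splits = defaultdict(list)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            splits[row["split"]].append(
                {"image_url": row["image_url"], "annotation": row["annotation"]}
            )
    return splits

if __name__ == "__main__":
    data = load_annotations("parking_signs_annotations.csv")
    print({split: len(rows) for split, rows in data.items()})
```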