Senior Cloud Data Engineer
Autonomous Vehicle Mapping start-up
Strong base compensation, bonus, equity, benefits
Our client is radically accelerating the arrival of self-driving vehicles by tackling some of the most challenging problems that stand in the way of safe and reliable navigation.
Every road in the world has a unique subsurface signature. Our client uses radar to create a map of those subsurface signatures, from which self-driving cars can navigate. Vehicles using our client's proprietary technology are unaffected by common but challenging road conditions such as snow, heavy rain, fog, and poor lane markings.
They are working with leading autonomous vehicle and traditional automotive companies, are backed by leading investors, are growing quickly, and are building a talented team that wants to transform the future of mobility and work on some of the hardest and most important engineering problems around. If that sounds like you, please drop us a line.
As a Senior Cloud Data Engineer, you will shape and implement the cloud infrastructure behind our radar-based mapping and sensor fusion-based localization algorithms. Your work will center on building scalable, large-scale mapping infrastructure for autonomous vehicle localization, with a focus on radar maps, in a system that interacts with fleets of vehicles. We take an engineering and software development approach to managing and scaling our cloud infrastructure, using software-based solutions to solve complex infrastructure challenges and automating those solutions.
WHO YOU ARE + WHAT YOU'LL DO
- Design, develop, deploy, and document our data infrastructure, including a distributed big data platform and a data lake built on a cloud storage system (S3)
- Design and maintain metadata systems, data catalogs, data governance, data search and discovery, and related services
- Develop and maintain data platform solutions in accordance with best practices
- Develop reliable data pipelines
- Troubleshoot and test production systems for security, performance, and availability
- Assist algorithm developers and data analysts in generating new data-based solutions relevant to problems they are tackling
- Train and educate team members on new data platform implementations and technologies
- Own and extend the HD-MAP data pipeline through the collection, storage, processing, and transmission of datasets
- You're comfortable thinking about both the big picture and the small details, and you enjoy building strong designs
- Enjoy working with small, high-output teams in a fast-paced startup environment.
- A "get-it-done" person. You know that done is better than perfect and are energized by constantly delivering and moving things forward.
- Experience with cloud platforms: AWS, Azure, Google Cloud; within AWS: IAM, EKS, VPC, S3, etc.
- Hands-on experience building, releasing, monitoring, and supporting mission-critical services in high-traffic applications
- Experience building and maintaining reliable, scalable ETL on big data platforms, as well as experience working with varied forms of data infrastructure: SQL/NoSQL databases, data warehouses, data lakes, Spark, columnar data storage, etc.
- Experience working closely with data analysts and data scientists, gathering technical requirements, and ensuring the collected data is of high quality and optimal for use
- Demonstrated ability to understand data sources, participate in design, and provide insights and guidance on database technology, data modeling, and data/MLOps best practices
- Bachelor's or Master's in Computer Science or a comparable engineering degree
- Solid programming skills: Go (Golang), TypeScript, Python, shell scripting, Terraform, Packer, C++, etc.
- Experience with Kubernetes, Docker
- Solid written and verbal communications skills
NICE TO HAVE
- Experience with open-source metadata management and data catalog systems, such as Apache Atlas or equivalent
- Knowledge of GIS or map related technologies
- Experience with deploying Machine Learning or Mapping Algorithms
- Experience building and maintaining a data lake with open-source Apache Hudi
- Experience deploying and maintaining Spark/Flink on Kubernetes (k8s)
- Demonstrated sophisticated troubleshooting and performance-tuning capabilities for Apache Spark-based big data platforms
- Experience with Apache Kafka/Pulsar technologies
- Experience with relational databases, especially PostgreSQL
- Knowledge of advanced data caching/orchestration with Alluxio, Apache Ignite, etc.
- Experience with real-time streaming pipelines using Flink, Kafka, etc.
Applicants must be currently eligible to work in the US. Please indicate on your application if you need or will eventually need sponsorship.
This is a high-priority, high-impact role for our client, and interviews are ongoing. If you wish to learn more about this position, please "Apply" now, or reach out to Colin Reeves at ConSol Partners for more information.