Job Details
Level: Experienced
Job Location: Remote - Richfield, OH
Position Type: Full Time
Salary Range: Undisclosed
Description
Summary: The Navigate360 suite of software, curriculum, and services helps K-12 schools and large institutions ensure the safety and well-being of millions of people across the US, Europe, and Oceania.
- More than 20,000 schools with nearly 12 million students in all 50 states trust Navigate360 solutions to ensure their students’ safety and well-being
- 5,000+ public safety agencies use Navigate360 training software and curriculum
- 18.9M+ individuals are covered by Navigate360 threat preparedness and response solutions
Our applications collect a vast amount of data on safety threats, preparedness drills, site maps and response plans, visitor logs, threat and behavioral cases, and more. Our Data Science and Solutions team is building the next generation of data integrations, analytics, and AI applications for K-12 schools to bring all this data to life in support of Navigate360’s mission of “zero incidents” in our schools, colleges, and other public spaces. Come help us build the future of AI-powered software with one of the fastest-growing, most data-driven SaaS solutions for education.
At Navigate360, we are an AWS and Databricks shop, and data management, security, and privacy are of the utmost importance to our mission. This position will play a critical role in building the infrastructure that powers our analytics and AI applications while keeping personal data protected and secure. The successful candidate will have 3+ years of proven experience building data pipelines, designing and architecting lakehouse artifacts, and shipping APIs that enable a variety of use cases: data integration, consumption via business intelligence tools and software, and machine learning and AI model training.
If selected, you will work with our software engineering teams to understand and ingest data from our live-site applications into our data lakehouse, implement data contracts, ensure high-quality change data capture, and move that data through the medallion architecture for delivery to customers via on-screen analytics, direct integration with their data infrastructure, and AI applications and agents. You will partner with data scientists and analysts to shape the data into features and metrics that can be used in a variety of ways. You will be a creative, self-motivated thinker with a growth mindset and a proven ability to work remotely and collaborate productively with a diverse team across US time zones.
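To make that flow concrete, here is a minimal sketch of a bronze-to-silver medallion pipeline, assuming PySpark with Delta Lake and Databricks Auto Loader; the bucket paths, table names, and columns are hypothetical illustrations, not Navigate360’s actual data model.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: land raw change-data-capture events as-is with Databricks Auto Loader.
# All paths and names below are hypothetical.
bronze = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "s3://example-bucket/_schemas/visitor_logs")
    .load("s3://example-bucket/raw/visitor_logs/")
)
(bronze.writeStream
    .option("checkpointLocation", "s3://example-bucket/_chk/bronze_visitor_logs")
    .toTable("bronze.visitor_logs"))

# Silver: enforce a simple data contract -- typed timestamps, required keys,
# and deduplication on the event identifier.
silver = (
    spark.readStream.table("bronze.visitor_logs")
    .withColumn("visited_at", F.to_timestamp("visited_at"))
    .dropna(subset=["school_id", "visitor_id"])
    .dropDuplicates(["event_id"])
)
(silver.writeStream
    .option("checkpointLocation", "s3://example-bucket/_chk/silver_visitor_logs")
    .toTable("silver.visitor_logs"))
```

A gold layer would then aggregate silver tables into the customer-facing metrics served through dashboards and APIs.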
Duties / Responsibilities:
- Create and maintain high-quality, business-critical datasets in AWS (S3, DynamoDB) and the Databricks lakehouse, including a SQL layer for certain consumption scenarios.
- Build, maintain, scale, and optimize ETL processes that enable analytics and insights for customers and for internal consumption via Databricks APIs.
- Design, code, unit test, and deploy processes for the ingestion, transformation, and curation of data while keeping costs under control and ensuring data security and privacy via AWS and the Databricks catalog (a minimal testing sketch follows this list).
- Design and build training and inference pipelines for text, image, and agentic frameworks for machine learning and AI applications.
- Explore, evaluate, and experiment on new data sources for application use cases as they become available.
- Create reliable automated data solutions based on the identification, collection, and evaluation of business requirements.
- Collaborate with other software engineers and data scientists/analysts to integrate data into in-product dashboards, reporting, machine learning services and AI applications.
- As part of a team, operate a highly secure, high-performance data ecosystem for at-rest and streaming data with extreme attention to data quality.
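As a sketch of the “design, code, unit test” duty above: writing pipeline transforms as pure functions over DataFrames keeps them easy to unit test before deployment. The metric and column names here are invented for illustration.

```python
from pyspark.sql import DataFrame, SparkSession, functions as F

def completed_drills_per_school(drills: DataFrame) -> DataFrame:
    """Count completed preparedness drills per school (hypothetical metric)."""
    return (
        drills.filter(F.col("status") == "completed")
        .groupBy("school_id")
        .agg(F.count("*").alias("completed_drills"))
    )

def test_completed_drills_per_school():
    # Runs locally under pytest; no cluster required.
    spark = SparkSession.builder.master("local[1]").getOrCreate()
    df = spark.createDataFrame(
        [("s1", "completed"), ("s1", "completed"), ("s2", "scheduled")],
        ["school_id", "status"],
    )
    result = {row["school_id"]: row["completed_drills"]
              for row in completed_drills_per_school(df).collect()}
    assert result == {"s1": 2}
```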
Qualifications
Required Qualifications:
- Bachelor’s degree in Computer Science or a related field (or equivalent experience)
- A Databricks, AWS, or Azure data engineering certification is a significant plus and, combined with 5+ years of experience, can substitute for the bachelor’s degree requirement
- A master’s degree in Data Engineering, Data Science, Machine Learning, or another relevant field is not required, but candidates who hold one will receive priority consideration
- An additional 5 years of proven experience and a demonstrable portfolio of work will also be accepted in place of the bachelor’s degree requirement
- 5+ years of software or data engineering experience
- 3+ years of experience with Databricks, Apache Spark, or another cloud-based data processing platform (e.g., AWS SageMaker, Azure, Google Cloud)
- Demonstrated proficiency with most or all of the following technologies:
- Python, Spark, Scala, SQL
- Experience operating Apache Spark, Hadoop, or another analogous map-reduce system
- AWS services: Redshift, EC2, AMIs, S3, Glue, Athena, DynamoDB, Lambda, Kinesis
- Server-based and serverless databases (SQL, NoSQL, column-family, etc.)
- REST APIs and Databricks endpoint lifecycle management
- Git or other VCS
- Agile development methodologies (Scrum and Kanban)
- Modern design and architectural patterns such as microservices, serverless architecture, and the single-responsibility principle
- Development tools like Jira and Confluence
- Experience delivering data products to customers via data visualization, REST APIs, and inference endpoints with best-in-class security protocols
- A proven ability to write clean, readable, well-designed, maintainable code (short code interview required)
- Adaptability to new technologies and situations
- Ability to communicate and collaborate cross-functionally and work well in a team-oriented environment
Preferred Qualifications:
- Master’s degree in Computer Science or related field.
- 7 or more years of relevant experience.
- Experience as a software / data engineer on a SaaS software product
Normal Working Hours and Conditions: Core business hours are generally 8:00 am – 5:00 pm. However, this position will require work to be performed outside of normal business hours based on Company operations.
Physical Requirements: Primary functions require sufficient physical ability and mobility to work in an office setting, including verbally communicating, seeing, and hearing to exchange information, and fine coordination, including use of a computer keyboard. Daily physical functions include standing, sitting, and walking for prolonged periods of time and occasionally stooping, bending, kneeling, crouching, reaching, and twisting. The employee may engage in lifting, carrying, pushing, and pulling light to moderate amounts of weight up to 25 pounds. The position also requires the operation of office equipment requiring repetitive hand movement.
Navigate360 is an Equal Opportunity Employer and does not discriminate against applicants due to race, color, religion, national origin, sex, age, disability, veteran status, sexual orientation, gender identity, or other legally protected status.