Fast Facts
Udacity is seeking an Independent Contractor for the role of Content Maintenance Mentor in Data Engineering to keep its data science courses up to date and well maintained, in collaboration with a supportive team.
Responsibilities: Update and maintain data engineering toolchains in Udacity Workspaces, create Docker images, and provision cloud resources in AWS and Azure to support a seamless learning experience.
Skills: Deep expertise in data engineering and platforms like AWS and Azure, strong knowledge of Infrastructure as Code tools, container orchestration, and cloud security practices.
Qualifications: Advanced experience with Docker, Kubernetes, and Linux system administration. Proven skills in automation, CI/CD pipelines, and setting up data workflows are highly desirable.
Location: Nationwide
Compensation: Not provided by employer. Typical compensation for this position ranges between $50 and $150 per hour.
Udacity is a pioneer in online technical education, offering high-quality courses across a wide range of disciplines. Our catalog includes short and long programs, Nanodegrees (bundled courses), and content tailored to multiple skill levels (foundational, beginner, intermediate, and advanced) as well as to business leadership audiences.
To ensure our content remains current, impactful, and industry-aligned, we continuously review and update our courses. We take a data-driven approach to evaluating content quality and identifying outdated material. Key performance metrics, such as student satisfaction, lesson ratings, and page-level feedback, help us determine whether a course requires maintenance. Throughout the year, various courses are kept under active maintenance to ensure they receive timely updates. To do this effectively, we regularly collaborate with expert contractors who help update the course content.
As new needs arise, we contact qualified candidates within our contractor pool to share project details, scope, and timelines. Contractors work closely with a Udacity team member who provides tooling, guidance, and logistical support. In most cases, contractors operate as individual contributors, though they may collaborate with people in other roles, such as Content Developers, Program Managers, and Learning Architects, to define scope, set priorities, and gather the necessary information about the content under maintenance.
About the School of Data Science
We’re building a contractor pool of data professionals to support our School of Data Science. The School of Data Science currently offers courses and Nanodegrees across the end-to-end data lifecycle, including (but not limited to):
- Data Literacy and Data Fluency
- Data Analytics and Business Analytics (SQL, Spreadsheets, Power BI, Tableau)
- Data Visualization and Data Storytelling
- Statistics, Probability, and Experimental Design
- Data Science and Machine Learning Fundamentals (Python, R, ML pipelines)
- Data Engineering and Streaming (Airflow, Kafka, Spark, Data Lakes/Lakehouses, Data Warehouses)
- Data Architecture, Data Governance, and Data Privacy
- Cloud Data Solutions on AWS and Azure (e.g., Redshift, Synapse, Databricks, S3/ADLS)
Our contractor pool helps ensure this catalog stays technically accurate, pedagogically sound, and aligned with industry practices in data, analytics, and AI.
Understanding Our Learning Infrastructure
To effectively maintain and update our data courses, you'll need to understand how students interact with our content. Our courses use two key technologies:
Udacity Workspaces
For practitioner content, we provide in-classroom workspaces so students don’t need to install or purchase any tools or set up environments locally. These workspaces are Docker containers running in Kubernetes, and students access them directly on the classroom page through their browser. For the School of Data Science, common workspace types include:
- Jupyter Notebooks: for Python- and R-based data analysis, statistics, and machine learning
- SQL Workspaces: browser-based SQL UIs against managed databases
- VS Code Workspaces: for more complex data engineering, data science, and software-for-data workflows
These workspaces require continuous updates and patching, and the exercise and project starter code must be kept compatible with each updated image (e.g., as Python libraries and data engineering toolchains change).
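To make this concrete, below is a minimal sketch of the kind of toolchain-update script that might run in a Dockerfile RUN step when refreshing a workspace image. It assumes a Debian-based image built as root, and every version pin shown is illustrative rather than Udacity's actual configuration:

    #!/usr/bin/env bash
    # Illustrative toolchain update for a workspace image; could be invoked
    # from a Dockerfile RUN step. All version pins are examples, not Udacity's.
    set -euo pipefail

    # Pin the Python data stack so exercises stay reproducible across rebuilds.
    pip install --no-cache-dir \
        "pandas==2.2.2" \
        "pyspark==3.5.1" \
        "jupyterlab==4.2.0"

    # Install the cloud CLIs students use in data engineering exercises.
    curl -fsSL "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o awscliv2.zip
    unzip -q awscliv2.zip && ./aws/install && rm -rf aws awscliv2.zip
    curl -sL https://aka.ms/InstallAzureCLIDeb | bash   # Debian-based images only

In practice, pinning exact versions like this is what keeps exercise and starter code compatible with the image from one rebuild to the next.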
Udacity Cloud Labs
We also provide temporary access to various cloud services via Cloud Labs. For the School of Data Science, these are primarily AWS and Azure labs that power data engineering, data architecture, and streaming exercises and projects. Cloud Labs are federated accounts that let students access the AWS or Azure console with temporary credentials, and each lab is pre-configured with RBAC and policies. In some cases, we use Infrastructure as Code to pre-provision the data resources required for an exercise or project, such as data warehouses, data lakes, Kafka clusters, or compute environments.
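As a hedged illustration of how such temporary credentials work at the CLI level, a lab backend might mint a short-lived AWS session as sketched below; the account ID, role name, session name, and duration are placeholders, not Udacity's real values:

    # Hypothetical illustration: mint short-lived credentials for a lab session.
    CREDS=$(aws sts assume-role \
        --role-arn "arn:aws:iam::123456789012:role/student-lab-role" \
        --role-session-name "student-lab-session" \
        --duration-seconds 3600 \
        --query "Credentials" --output json)

    # Export the temporary credentials; the session carries only the
    # permissions granted by the lab role's attached policies.
    export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | jq -r .AccessKeyId)
    export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | jq -r .SecretAccessKey)
    export AWS_SESSION_TOKEN=$(echo "$CREDS" | jq -r .SessionToken)

    aws sts get-caller-identity   # now reports the assumed lab role

Because the session inherits only the lab role's policies and expires automatically, students can complete an exercise without ever holding long-lived credentials.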
If you thrive on challenges, want to make an impact, and are interested in joining our contractor community, we encourage you to read on and apply.
JOIN THE TEAM TODAY
Required skills/qualifications:
- Deep expertise in data engineering and data platforms on AWS and/or Azure.
- Advanced knowledge of Infrastructure as Code tools (Terraform, CloudFormation, or Bicep) for provisioning data infrastructure.
- Strong experience writing Dockerfiles, Makefiles, and shell scripts, especially for data science and data engineering environments.
- Hands-on experience with Kubernetes or container orchestration (GKE or equivalent) is a strong plus.
- Deep understanding of cloud security and data governance, including IAM policies, Azure RBAC, network controls, encryption, and handling of sensitive data.
- Proven ability to create and configure images and environments (e.g., AMIs, VM images, cluster templates, workspace images) for data workloads.
- Experience with Linux system administration and package management.
- Proficiency in setting up and configuring development environments and toolchains for data teams (Python/R environments, SQL engines, CLI tools, etc.).
- Strong automation skills and ability to build reproducible infrastructure and environments.
- Experience with CI/CD pipelines and deployment automation (e.g., GitHub Actions, CircleCI, Azure DevOps) is a plus.
Responsibilities:
- Udacity Workspaces
  - Install and update data science and data engineering toolchains in existing workspace images (e.g., Python, R, Jupyter, VS Code extensions, and CLI tools such as the AWS CLI, Azure CLI, and Kafka/Spark tooling).
  - Create new Docker images for Udacity Workspaces tailored to data workloads (see the sketch after this list). This may require:
    - Writing and maintaining Dockerfiles, Makefiles, and shell scripts.
    - Optimizing images for performance, reproducibility, and reliability.
    - Coordinating with platform teams when changes affect Kubernetes/GKE configuration.
- Cloud Labs (AWS and Azure)
  - Update IAM policies, RBAC configurations, and resource provisioning scripts as needed to support data workloads (e.g., access to S3/ADLS, Redshift/Synapse, Databricks, Kafka, and managed databases).
  - Create or update Cloud Labs using our partner tools. This requires knowledge of AWS IAM policies, Azure RBAC, and Infrastructure as Code (Terraform, CloudFormation, or Bicep).
  - Create and maintain images or templates (e.g., AMIs, VM images, Databricks clusters) pre-loaded with the required tools, drivers, and configurations for data projects.
  - Install and configure tools in Linux environments (e.g., Python data stacks, Spark, Kafka clients, database clients) for student use.
  - Develop Infrastructure as Code templates to automate provisioning of data infrastructure for exercises and projects (pipelines, data stores, compute, networking, monitoring).
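As referenced in the list above, here is a minimal sketch of what one maintenance run might look like end to end: rebuilding and publishing a workspace image, then re-applying the lab's Infrastructure as Code. The registry, directory layout, tag scheme, and variable names are assumptions made for this sketch, not Udacity's actual tooling:

    #!/usr/bin/env bash
    # Illustrative maintenance run: rebuild a workspace image, push it, and
    # re-apply the lab's Infrastructure as Code. Registry, paths, and variable
    # names are placeholders.
    set -euo pipefail

    IMAGE="registry.example.com/workspaces/data-engineering"
    TAG="$(date +%Y%m%d)"

    # Rebuild the workspace image from its Dockerfile and publish it.
    docker build -t "${IMAGE}:${TAG}" ./workspace
    docker push "${IMAGE}:${TAG}"

    # Re-provision the exercise's data infrastructure (data stores, compute,
    # networking) from Terraform definitions.
    terraform -chdir=./lab-infra init -input=false
    terraform -chdir=./lab-infra apply -auto-approve \
        -var "workspace_image=${IMAGE}:${TAG}"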
Why should you apply?
- Gain recognition for your technical knowledge
- Network with other top-notch technical mentors
- Earn additional income
- Contribute to a vibrant, global student community
- Stay current with cutting-edge technologies
Also, when attaching your resume/CV, please make sure the document is in English.