Python Data Engineering Expert

Looking for a talented data engineer to handle data cleaning and analytics on upstream scraped data. The work includes writing code to generate custom reports for clients, as well as streamlining the entire data pipeline across each phase of the data lifecycle, from generation to serving.

About Atlas

Atlas helps companies develop data-driven solutions to solve their toughest business challenges.

Atlas uses data to help clients capture leads, monitor prices, identify patterns, make important decisions, and more. Over the past several years, Atlas has worked with dozens of enterprise clients such as Amazon, Chegg, and Spring (formerly TeeSpring). Taking a data-focused approach helps businesses large and small reach new heights.

In order to help our clients grow, Atlas offers the full cycle of data engineering services:

  1. Extracting - pull unstructured data from websites and APIs
  2. Processing - clean and engineer data to be in the desired format
  3. Loading - structure and store data in an external source (SQL databases, CSV, Excel, Snowflake, Airtable, etc.)
  4. Analyzing - apply state-of-the-art algorithms to generate high-value insights
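
The four stages above can be sketched end to end in Python. This is a minimal illustration only: the record fields, table name, and in-memory SQLite destination are hypothetical stand-ins, and a real pipeline would pull from live websites or APIs and load into whichever store the client uses.

```python
import pandas as pd
from sqlalchemy import create_engine

# 1. Extracting: stand-in for unstructured records pulled from a site or API
raw_records = [
    {"name": " Widget A ", "price": "$19.99"},
    {"name": "Widget B", "price": "$5.00"},
    {"name": None, "price": "$3.50"},  # incomplete row to drop during QA
]

# 2. Processing: clean and coerce the data into the desired format
df = pd.DataFrame(raw_records).dropna(subset=["name"])
df["name"] = df["name"].str.strip()
df["price"] = df["price"].str.lstrip("$").astype(float)

# 3. Loading: store the structured result in an external source
# (SQLite here; Snowflake, Postgres, CSV, etc. would swap in instead)
engine = create_engine("sqlite:///:memory:")
df.to_sql("products", engine, index=False, if_exists="replace")

# 4. Analyzing: derive a simple insight from the loaded table
avg_price = pd.read_sql("SELECT AVG(price) AS avg_price FROM products", engine)
```

The same shape scales up: swap the in-memory list for a scraper, the SQLite engine for a production database, and the single query for whatever analysis the client report requires.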

Join us in shaping tomorrow together!

Atlas combines the many facets of data work into a unified approach, allowing our clients to get clean, organized data that is ready to power their growing business needs. Through Atlas’s structured approach to tackling complex business problems and generating data-driven solutions, we turn client projects into long-lasting, big-impact work that helps clients continue growing their companies.

If this work sounds like something you are interested in, apply now!

Requirements & Responsibilities

This role will require an understanding of Python data engineering libraries such as:

  • pandas
  • numpy
  • dask
  • multiprocessing
  • matplotlib or some alternative visualization library
  • sqlalchemy
  • boto3 (for integration with AWS cloud)


A strong understanding of designing data-intensive applications will be highly valuable. Along with developing custom data engineering scripts for different client projects, you will help make important internal decisions about database design in order to streamline and organize client undertakings. This entails making design choices across the data engineering lifecycle, from data generation and ingestion through cleaning and serving data to different clients. A strong understanding of AWS will be core to this role.

Familiarity with the following concepts will be immensely valuable:

Data Engineering Skillset:

  • Python Data Engineering and Analysis Libraries (listed above)
  • PostgreSQL (for SQL queries)
  • Different AWS Cloud Tools (EC2, RDS, S3, AWS CLI, CloudWatch, CloudFormation)
  • Jupyter Notebooks to organize code for cleaning, QAing, and generating custom reports for clients
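
The kind of cleaning and QA step that might live in one of those client-report notebooks can be sketched with pandas. The column names, sample values, and QA rules below are hypothetical examples, not an actual Atlas report:

```python
import pandas as pd

# Hypothetical raw order data for a client revenue report
orders = pd.DataFrame({
    "client": ["acme", "acme", "globex", "globex", "globex"],
    "amount": [120.0, -5.0, 80.0, 200.0, None],
})

# QA check: flag rows that would corrupt the report (missing or negative amounts)
bad_rows = orders[orders["amount"].isna() | (orders["amount"] < 0)]

# Report: per-client totals over the valid rows only
valid = orders.drop(bad_rows.index)
report = valid.groupby("client", as_index=False)["amount"].sum()
```

In a notebook, `bad_rows` would typically be surfaced for review before the `report` table is exported to the client.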

General Best Practices & Automation Skillset:

  • Shell scripting & general code automation tools
  • Creating code templates
  • Automating redundant commands and deployment
  • Docker
  • Git
  • Software best practices (logging, documentation, testing, automation)
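
As one concrete instance of the logging best practice above, a reusable logger setup lets every pipeline script emit consistent, timestamped output. The function name and log format here are illustrative, not an existing Atlas convention:

```python
import logging

def get_logger(name: str) -> logging.Logger:
    """Return a logger with a consistent format for pipeline scripts."""
    logger = logging.getLogger(name)
    if not logger.handlers:  # avoid attaching duplicate handlers on re-import
        handler = logging.StreamHandler()
        handler.setFormatter(logging.Formatter(
            "%(asctime)s %(name)s %(levelname)s %(message)s"))
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)
    return logger

log = get_logger("client_pipeline")
log.info("starting report generation")
```

Centralizing setup like this keeps log output uniform across scripts, which pays off once CloudWatch or another aggregator is collecting them.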

You will be joining a team of developers whose experience overlaps with all of the above topics, so help is available when you need it. Lastly, I will ask that you sign an NDA to protect Atlas IP such as code, data, and information shared internally either by Atlas or by any of Atlas’s clients.

If your skillset lines up with these requirements, apply now!
