Senior Data Engineer (Python)

  • Location: Minneapolis, Minnesota
  • Job #: 16782
  • Compensation: 130
  • Job Type: Direct Hire
  • Category: Programmer/Analyst

Our client partner is looking for top-notch talent to drive their data initiatives, including:

  • Data architecture & pipelining

  • Data normalization, quality & governance

  • BI/analytics for ETL introspection & automation

  • Cloud migration

Some of the Things You'll Do

  • Leads the development, coding, and testing of technical solutions and enterprise-level applications

  • Leads creation and maintenance of optimal data pipeline architecture

  • Leads identification, design, and implementation of internal process improvements: automating manual processes, optimizing data delivery, and re-designing infrastructure for greater scalability

  • Monitors performance of data pipeline architecture for errors, data quality, and frequency tolerance

  • Works closely with stakeholders to assist with data pipeline technical issues and support their data infrastructure needs

  • Identifies system deficiencies and implements solutions, taking a proactive approach to software development

  • Recognizes the necessity of and adheres to established coding standards

  • Works closely with other team members to meet development deadlines and schedules, and is generally solely responsible for critical work

  • Applies intermediate to advanced knowledge of industry trends and developments to spearhead implementation of process improvements

  • Mentors and trains other engineers

Required Experience and Skills

  • Expert knowledge of server-side scripting languages such as Python

  • Expert understanding of Linux environment and commands

  • Expert knowledge of data transformation methods (ETL) without using high-level BI or SQL tools

  • Expert working knowledge of SQL within a relational database system

  • Expert proficiency in data structures and computational complexity

  • Expert proficiency with software system scalability and performance optimization

  • Expert proficiency in consuming & processing data sourced from an API

  • Expert working knowledge of AWS cloud services (S3, EC2), basic knowledge of content delivery networks (CDNs)

  • Intermediate understanding of API and web services development

  • Basic proficiency with data visualization tools and technologies

  • Experience working with big data technologies

  • Experience with queueing systems such as RabbitMQ preferred

  • Solid experience with version control systems

  • Experience with ElasticSearch a plus

  • Demonstrated in-depth experience with multiple database systems and environments

  • Excellent attention to detail. Please include “Ada Lovelace” in your resume or cover letter.

  • Expert understanding of coding best practices

  • Ability to thrive in a team-based environment

  • Superior analytical, interpersonal, and written communication skills