Data Engineer

WHAT WE DO

We are North America’s top online marketplace seller, with over 10 years of knowledge and expertise.
Pharmapacks is a premier team of eCommerce experts connecting consumers to their favorite health, beauty, and wellness brands on popular online marketplaces such as Amazon, Walmart, Target Plus, Google Marketplace, and eBay. Since our inception in 2010, we’ve grown to become one of the largest marketplace sellers in the U.S. and are now the largest Amazon third-party seller in North America. 

Since our inception, we’ve achieved exponential growth by consistently delivering on our promise of best-in-class eCommerce enablement. We provide some of the world’s most renowned and innovative brands with 360-degree services spanning brand management, marketing, and fulfillment. We’ve also built cutting-edge proprietary technology incorporating our 10 years of marketplace experience, which we leverage in areas such as search marketing, inventory forecasting, and logistics optimization to quickly realize the maximum sales potential of each product we sell.

As a founder-led organization, we highly value our entrepreneurial spirit and ability to get stuff done in an incredibly high-growth environment. We strive to develop the best solutions for our customers while creating a place where our employees can do their best work and be their best selves.

We are now part of the Carlyle portfolio of companies, which means substantial investment in our growth, our business, and, most importantly, our people.

WHY THIS ROLE IS MISSION-CRITICAL

The Data Engineer is responsible for Pharmapacks' data infrastructure. They will continuously improve existing pipelines and launch new ones to meet evolving business needs in a secure and reliable manner.

KEY PRIORITIES

  • Expand and maintain our AWS data lake
  • Design and run ETL and data warehousing processes
  • Build production machine learning pipelines
  • Scrape and process web data

WHO YOU ARE:

  • Degree in Computer Science
  • Proficiency in one or more programming languages, preferably Python
  • Experience with APIs
  • Experience with relational database schema design and querying
  • Experience with ETL design, implementation, and maintenance
  • Experience with big data frameworks such as Spark or Hadoop
  • Experience with AWS technologies such as Redshift, EC2, S3, Glue, SageMaker, RDS, or Lambda
  • Experience with web crawling, extraction, and processing tools such as Selenium, Beautiful Soup, or Scrapy

WHAT WE VALUE:

  • Adaptable: Changes course easily – Knows when to be patient and when to push – Works well in the gray – Owns mistakes and learns from them – Balances multiple priorities
  • Entrepreneurial Spirit: Takes initiative, doesn’t wait for direction – Builds for the future – Takes personal ownership and accountability – Is resourceful in getting things done
  • Collaborative: Is self-aware and open-minded – Integrates the perspectives of others – Is direct but respectful – Communicates cross-functionally – Knows when to get people involved and when to make a decision – Takes an inclusive approach
  • Customer & Brand Focused: Makes decisions in the best interest of the company and our customers – Focuses on internal and external customers
  • Lives our Values: It’s not just what you do, but how you do it – Contributes to a positive and productive environment
  • Deep Functional Expertise: Has the skills necessary to perform the job – Keeps current on trends, skills, and practices – Puts learning into practice
  • Builder: Sets team goals and roles – Develops, motivates, and empowers – Delivers constructive and encouraging feedback – Holds people accountable for results – Recognizes high performance
  • Change Leader: Challenges the current point of view – Puts changes in context for the team – Executes changes that impact the business – Is proactive and positive – Listens to and keeps the team up to speed