Specialist, Data Engineering (Remote)
Responsibilities
- Work collaboratively with other engineers, data scientists, analytics teams, scrum masters, and business product owners in an agile environment.
- Architect, build, and support the operation of cloud and on-premises enterprise data infrastructure and tools.
- Design robust, reusable, and scalable data-driven solutions and data pipeline frameworks to automate the ingestion, processing, and delivery of both structured and unstructured batch and real-time streaming data.
- Lead the development of data APIs and data delivery services to support critical operational processes, analytical models and machine learning applications.
- Lead the selection and integration of the data-related tools, frameworks, and applications required to expand our platform capabilities.
- Understand and implement best practices in the management of enterprise data, including master data, reference data, metadata, data quality, and lineage.
- Participate in the Agile implementation and maintenance of source control and release procedures.
- Be an effective communicator when interacting with technical and non-technical audiences.
- Communicate with business stakeholders to understand their goals and translate them into a technical solution architecture and requirements.
- Take an iterative, collaborative, and transparent approach to building technical solutions and data products.
- Lead and mentor other data engineers in following engineering best practices.
- Produce technical solutions that satisfy business requirements, with a focus on scalability, stability, efficiency, maintainability, and extensibility.
Qualifications
- Bachelor’s degree in computer science, math, engineering, or a related technical field
- Six years of combined experience applying data engineering, data modeling, data analytics, data warehousing, business intelligence, database administration, and data integration concepts and methodologies
- Five years of experience architecting, building, and administering big data and real-time streaming analytics architectures in on-premises and cloud environments, using technologies such as Amazon Kinesis, Apache Kafka, and Apache Spark
- Four years of experience architecting, building, and administering large-scale distributed application frameworks such as Apache Spark and Hadoop
- Three years of experience with Linux operations and development, including basic commands and shell scripting
- Three years of experience executing DevOps and DevSecOps methodologies and continuous integration/continuous delivery (CI/CD)
- Strong understanding of ETL concepts and REST-oriented APIs for creating and managing data integration jobs
- Experience with AWS services such as Lambda, EC2, EMR, EKS, Redshift, Glue, S3, IAM, RDS, Aurora, and DynamoDB
- Knowledge of cloud networking, security, storage, and compute services
- Infrastructure provisioning experience with tools such as AWS CloudFormation and Terraform
- Data modeling experience in NoSQL databases such as DynamoDB and Cassandra
- Demonstrated skills in detail-oriented delivery management
- Expertise in SQL for data profiling, analysis, and extraction
- Familiarity with data science techniques and frameworks
- Results-oriented with a strong customer focus
- Creative thinker with strong analytical and problem-solving skills
- Ability to prioritize work to meet tight deadlines
- Ability to keep pace with the latest technology advances and quickly grasp new technologies to support the environment and contribute to project deliverables
Preferred Qualifications
- Master’s degree in a technical field (e.g. computer science, math, engineering)
- Software development experience in relevant programming languages (e.g. Java, Python, Scala, Node.js)
- Understanding of big data and real-time streaming analytics processing architectures and data lake ecosystems
- Experience with data warehousing architecture and implementation, including hands-on experience with source-to-target mappings and developing ETL code
- Experience with advanced analytics and machine learning concepts and technology implementations
- Experience with data analysis and using data visualization tools to describe data
- Experience implementing RESTful APIs and microservices using a design-first approach, with a focus on asset reusability
- Relevant technology or platform certification (AWS Solutions Architect Associate, AWS Data Analytics Specialty, or AWS Solutions Architect Professional)
Compensation
**Please note that the compensation information that follows is a good faith estimate for this position only and is provided pursuant to applicable pay transparency and compensation posting laws. It is estimated based on what a successful candidate might be paid in certain Company locations.**
The salary for this position generally ranges from $120,000 to $155,000 annually. This range is an estimate based on potential qualifications and operational needs. Salary may vary above and below the stated amounts, as permitted by applicable law.
Additionally, this position is typically eligible for an annual bonus of 17.5%, based on the Company Bonus Plan and individual performance, at the Company’s discretion.
Applicants must be authorized to work for any employer in the U.S. We are unable to sponsor or take over sponsorship of an employment visa at this time.
Working Conditions
- Office environment, work from home
- Occasional travel may be required
#LI-Remote