We are seeking a Senior Data Engineer/Architect who will play a crucial role in optimizing our customers' data infrastructure by designing, building, and maintaining the data pipelines that propel our clients to new heights.
System Architecture: Develop and maintain scalable and efficient data pipelines for ETL processes, utilizing AWS, Databricks, Python, and SQL technologies
Performance Optimization: Enhance and troubleshoot existing data pipelines to improve performance and reliability
Data Quality Assurance: Implement processes for data quality and validation to ensure accuracy and consistency
Process Enhancement: Identify and implement internal process improvements, optimizing infrastructure for scalability and automating manual processes
6+ years of experience in data pipeline engineering for both batch and streaming applications.
5+ years of experience in Data Warehouse design and development, including a minimum of 2 years of experience with Databricks.
Knowledge of data modeling approaches, including dimensional modeling and model normalization.
Knowledge of dbt materializations, tests, jobs, and deployments.
Hands-on coding ability in at least one core language (Python, Java, or Scala) with Spark.
Expertise in working with distributed data warehouses and cloud services (e.g., Snowflake, Redshift, AWS) via scripted pipelines.
Experience handling large, complex XML, JSON, and CSV datasets from a variety of sources and databases.
This is a 6+ month contract position; candidates must be available to work on a hybrid basis in South Boston.