Description:
We are seeking a highly skilled Sr. Data Engineer with expertise in SnapLogic, dbt (Data Build Tool), Snowflake, and Python/PySpark, as well as experience working with the Azure ecosystem.
The ideal candidate will play a key role in designing, developing, and optimizing scalable data pipelines and architectures to support our business intelligence and analytics needs. Candidates with experience in the finance industry are strongly preferred.
Key Responsibilities:
Data Pipeline Development: Design, develop, and maintain ETL/ELT processes using SnapLogic and dbt to extract, transform, and load data efficiently.
Data Warehousing: Manage and optimize Snowflake environments, including schema design, query optimization, and workload management.
Big Data Processing: Implement scalable data processing solutions using Python and PySpark to handle large datasets.
Cloud Integration: Build and maintain data solutions within the Azure ecosystem, leveraging services such as Azure Data Factory, Azure Databricks, Azure Synapse Analytics, and Azure Blob Storage.
Collaboration: Work closely with data analysts, data scientists, and business stakeholders to ensure data solutions align with business needs.
Monitoring and Optimization: Implement monitoring, logging, and alerting mechanisms to ensure the reliability and performance of data pipelines.
Documentation: Maintain comprehensive documentation for data models, pipelines, and processes to ensure knowledge sharing and compliance.
Qualifications:
Experience: 5+ years of experience in data engineering or related roles.
Tools & Technologies:
Expertise in SnapLogic for ETL
| Field | Value |
| --- | --- |
| Organization | Logic Planet |
| Industry | IT / Telecom / Software Jobs |
| Occupational Category | Data Engineer |
| Job Location | New York, USA |
| Shift Type | Morning |
| Job Type | Full Time |
| Gender | No Preference |
| Career Level | Experienced Professional |
| Experience | 5 Years |
| Posted at | 2025-01-31 6:24 pm |
| Expires on | 2026-01-13 |