Key Responsibilities:
- Assist in designing, developing, and maintaining ETL/ELT pipelines (a minimal illustrative sketch follows this list).
- Help automate data workflows and integrate multiple data sources.
- Support AWS infrastructure management (S3, EC2, RDS, Redshift).
- Assist in managing Databricks clusters and optimizing Spark jobs.
- Contribute to data governance practices that ensure data integrity and security.
- Collaborate with data scientists and analysts to provide clean data for ML models and business reporting.
- Maintain clear documentation of data pipelines and architecture.
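As a purely illustrative sketch of the kind of ETL work this role involves, the snippet below shows a minimal PySpark job that reads raw point-of-sale transactions, cleans them, and writes a curated daily-revenue table. All paths, table names, and columns are hypothetical, not part of this posting.

```python
# Minimal illustrative PySpark ETL job; all paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("pos-sales-etl").getOrCreate()

# Extract: load raw point-of-sale transactions from a (hypothetical) S3 bucket.
raw = spark.read.option("header", True).csv("s3://example-bucket/raw/pos_transactions/")

# Transform: drop malformed rows, normalize types, and derive a sale date.
clean = (
    raw.dropna(subset=["transaction_id", "amount"])
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("sale_date", F.to_date("timestamp"))
)
daily_revenue = clean.groupBy("sale_date").agg(F.sum("amount").alias("revenue"))

# Load: write partitioned Parquet for downstream analytics and BI reporting.
daily_revenue.write.mode("overwrite").partitionBy("sale_date").parquet(
    "s3://example-bucket/curated/daily_revenue/"
)
```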
Qualifications:
- Bachelor’s degree in Computer Science, Data Science, Engineering, or a related field.
- Familiarity with Databricks, AWS services, and data processing tools such as Apache Spark.
- Basic programming skills in Python or Scala, along with working knowledge of SQL.
- Understanding of data pipelines and ETL/ELT processes.
- Strong analytical mindset with the ability to troubleshoot data processing issues.
- Proactive learning approach, especially in big data, cloud infrastructure, and data governance.
Good to Have:
- Familiarity with BI tools like Looker, Power BI, or Tableau.
- Experience with data from point-of-sale (POS) systems or within the food and beverage (F&B) or hospitality industries.
- Basic understanding of version control systems like Git.
- Introductory-level certifications in AWS or Databricks.
If this sounds like the right opportunity for you, apply directly via the Apply Now link.