Roles & Responsibilities
- Design, develop, and launch efficient, reliable data pipelines.
- Troubleshoot issues in existing data pipelines and build their successors.
- Build modular pipelines to construct features and modelling tables.
- Maintain data warehouse architecture and relational databases.
- Monitor incidents, perform root-cause analysis, and implement appropriate corrective actions.
- Write, document, and maintain highly readable code.
- Obtain and ingest raw data at scale, including writing scripts, web scraping, calling APIs, and writing SQL queries (a brief Python sketch follows this list).
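For illustration only, here is a minimal Python sketch of the kind of ingestion task listed above: pulling JSON records from a REST endpoint and loading them into PostgreSQL. The endpoint URL, connection string, and table layout are hypothetical placeholders, not details from this posting.

```python
# Minimal ingestion sketch: fetch JSON from a (hypothetical) API and
# bulk-insert it into a (hypothetical) PostgreSQL table.
import json

import psycopg2
import requests

API_URL = "https://api.example.com/v1/events"     # hypothetical endpoint
DSN = "dbname=analytics user=etl host=localhost"  # hypothetical DSN


def ingest_events() -> int:
    """Fetch raw events and bulk-insert them; return the row count."""
    resp = requests.get(API_URL, timeout=30)
    resp.raise_for_status()
    records = resp.json()  # assumed shape: [{"id": ..., "payload": {...}}, ...]

    # The connection context manager commits on success, rolls back on error.
    with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
        cur.executemany(
            "INSERT INTO raw_events (id, payload) VALUES (%s, %s) "
            "ON CONFLICT (id) DO NOTHING",
            [(r["id"], json.dumps(r["payload"])) for r in records],
        )
    return len(records)


if __name__ == "__main__":
    print(f"ingested {ingest_events()} rows")
```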
Requirements
- Bachelor's or Master's degree in Computer Science or a related field, with a minimum of 2 years of IT experience.
- Minimum 2 years' experience designing, building, and operationalizing medium-to-large-scale data integration projects (structured and unstructured) across data lakes, data warehouses, blob storage, and RDBMSs, using AWS integration services.
- Minimum 1 year of hands-on experience in batch and real-time data integration and processing.
- Strong proficiency with databases such as MySQL, PostgreSQL, and Redshift.
- Solid background in programming languages such as Python, Go, or Java; Python is a must.
- Experience building and maintaining scalable ETL pipelines using Apache Airflow (see the Airflow sketch after this list).
- Experience building and maintaining scalable REST APIs for the data team using Python (Flask/Django) or Go (see the Flask sketch after this list).
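As a rough illustration of the Airflow requirement, the sketch below wires three placeholder tasks into a daily extract-transform-load DAG. The DAG id, schedule, and task bodies are assumptions for illustration, not details from this posting.

```python
# Minimal Airflow 2.x DAG sketch: three placeholder ETL steps run daily.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    """Placeholder: pull raw data from a source system."""


def transform():
    """Placeholder: clean the raw data and build feature tables."""


def load():
    """Placeholder: write the results to the warehouse."""


with DAG(
    dag_id="example_etl",            # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load  # linear dependency chain
```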
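And a minimal Flask sketch of the REST-API requirement: two illustrative routes, /health and /datasets/<name>, backed by an in-memory stand-in for whatever store the real service would query. All route and variable names here are hypothetical.

```python
# Minimal Flask REST API sketch with two illustrative routes.
from flask import Flask, jsonify

app = Flask(__name__)

# In-memory stand-in for the data team's real metadata store.
DATASETS = {"orders": {"rows": 120000}, "users": {"rows": 45000}}


@app.route("/health")
def health():
    """Liveness probe for the service."""
    return jsonify(status="ok")


@app.route("/datasets/<name>")
def get_dataset(name: str):
    """Return metadata for a dataset, or a 404 if it is unknown."""
    meta = DATASETS.get(name)
    if meta is None:
        return jsonify(error=f"unknown dataset: {name}"), 404
    return jsonify(name=name, **meta)


if __name__ == "__main__":
    app.run(port=5000)
```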
Interested applicants, kindly send your resume to firstname.lastname@example.org
We regret that only shortlisted candidates will be notified.
Yeo Su Qing Cheryl
Personnel Reg No.: R1434940
EA License No.: 02C3423