Senior Data Engineer
Your role
- Designing and implementing scalable data pipelines to collect, process, transform, and store large volumes of structured and unstructured data.
- Developing efficient ETL (Extract, Transform, Load) processes to ensure the availability of clean and accurate data for analysis.
- Building and maintaining data warehouses or data lakes to enable easy access to structured datasets for reporting and analytics purposes.
- Collaborating with cross-functional teams to identify opportunities for improving data quality, reliability, performance, and efficiency.
- Implementing appropriate security measures to protect sensitive data from unauthorized access or breaches.
- Monitoring system performance, identifying bottlenecks or issues, and implementing optimizations to ensure high availability and scalability.
- Conducting thorough testing of developed solutions to ensure they meet functional requirements and performance expectations.
- Mentoring junior team members by providing guidance on best practices in software development, database design, and data engineering techniques.
- Staying up to date with emerging technologies in big data processing, cloud computing, and distributed systems, and evaluating their potential applications in the organization's context.
- Collaborating with stakeholders to understand their business needs and translating them into technical requirements.
To succeed in this role
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Proven experience as a Data Engineer or similar role with a focus on building scalable data processing systems.
- Strong programming skills in languages such as Python, Java, or Scala, along with proficiency in SQL for querying databases.
- Experience with big data technologies such as the Hadoop ecosystem (HDFS, MapReduce) and Apache Spark/PySpark for distributed computing.
- Proficiency in working with relational databases (e.g., MySQL) as well as NoSQL databases (e.g., MongoDB).
- Familiarity with cloud platforms such as GCP, AWS, or Azure for deploying scalable infrastructure using services such as S3 and EC2.
- Knowledge of containerization technologies such as Docker and Kubernetes is desirable but not mandatory.
- Strong problem-solving skills with an ability to analyze complex datasets efficiently.
- Excellent communication skills to effectively collaborate with cross-functional teams.
- Strong understanding of data platforms, including their architecture, components, and functionalities.
- Proficiency in data integration, storage, processing, analytics, and visualization technologies.
- Knowledge of programming languages such as SQL, Python, or R is also beneficial.
Ref ID:
56482
Location:
Hong Kong, HK
Business Unit:
Data Monetisation
Full Time/Part Time:
Full Time
Job Function:
Data Science & Business Intelligence
Featured Job Category: