Data Architect (Renewable Contract)

Job Type: Contract
Posted: about 3 years ago
Contact: Alice Tong
Reference: 208118_1611624852

Project Overview:

  • Manage and expand our open, secure, scalable, and reliable data platform, on which all our new smart energy commercial digital assets will operate, initially within Hong Kong and later in other geographies
  • Lead data engineers in building ETL pipelines across different data sources, using technologies such as Spark and Kafka
  • Design and maintain scalable and reliable infrastructure in the cloud and on Kubernetes for data engineers and data scientists
  • Design and optimize the data schemas for the various IoT devices and legacy systems that feed large volumes of data into the cloud environment
  • Take responsibility for database configuration, performance tuning, and data lake and data warehouse design in our cloud environment, where IoT devices stream data into a range of relational and NoSQL databases
  • Ensure the right access controls are set up in each environment
  • Automate processes for data backup, security checks, alerts, and disaster recovery
  • Support the visualization of key datasets for authorized users (e.g. set up the infrastructure for data analysts)
  • Support data analysis activities
  • Identify robust and commercially viable technology options, with a bias towards buying or reusing before building and an understanding of the full impact of decisions
  • Foster the team's culture of excellence and openness

Requirements:

  • Must have:
    • At least 4 years' hands-on experience in data architecture and infrastructure design, implementation, and operations
    • At least 2 years' hands-on experience with AWS
    • Experience with database concepts (database clusters, triggers, materialized views), big data (data lakes, data warehouses, ETL, data pipelines), data modelling concepts and approaches, and performance tuning
    • At least 2 years' experience with SQL and/or NoSQL databases (MongoDB, Cassandra, etc.), including setting up, scripting, and managing database systems
    • At least 2 years' development experience in ETL (Python, Spark) and API development (Python or Node.js)
    • Familiarity with serverless architecture and container technology, including Kubernetes and Helm
    • Experience processing streaming datasets (Spark, Kafka, or Flink)
    • Conversational in English
  • Important to have:
    • Strong coordination, problem-solving, analytical, presentation, and communication skills
    • Self-motivated fast learner, able to work under pressure and committed to meeting project deadlines
    • Proactive, innovative, and flexible enough to work in a multicultural environment
    • Humble enough to embrace better ideas from others, eager to make things better, and open to challenges and possibilities
    • Demonstrated experience in managing vendors, handling escalations, and managing internal as well as external staffed projects
    • Experience in Machine Learning model deployment and infrastructure is a plus
  • Nice to have:
    • Strong project management skills in planning, stakeholder management, governance, issues and risk management
    • Cantonese and/or Mandarin is a bonus

Interested parties, please share your resume via Apply Now.