Responsibilities:
- Design and implement the data pipeline for processing streaming data from IoT devices.
- Optimize the cloud data architecture to handle massive volumes of incoming data in a scalable manner.
- Manage and maintain existing databases, including both relational and non-relational ones, to support critical day-to-day operations.
- Develop APIs for data administration and for serving data to other parties.
- Automate existing processes, such as data backups, security checks, alerts and disaster recovery, to streamline the workflow.
- Collaborate closely with the development team to ship new features and enhancements for the company's digital products.
Requirements:
- At least 2 years of hands-on experience with PostgreSQL or other relational databases. Knowledge of NoSQL databases is a strong advantage.
- At least 1 year of work experience on Python-based projects, such as developing RESTful APIs and ETL pipelines.
- Solid understanding of stored procedures, functions and triggers in relational databases.
- Familiarity with container technologies, preferably Docker, as well as container orchestration frameworks such as Kubernetes.
- Experience in cloud development, preferably with Amazon Web Services, will be a big plus.
- Knowledge of big data frameworks such as Kafka, Spark and Flink will be a bonus.
Interested parties, please submit your resume via Apply Now.