Job Responsibilities (including, but not limited to, the following):
- Participate in the architecture, design, and implementation of large-scale distributed systems that extract data.
- Perform data scraping, cleansing, curation, parsing, integration, semantic mapping and enrichment.
- Create and maintain documentation and technical specs.
- Perform analysis and monitoring on datasets to ensure completeness and integrity.
- Coordinate project-related work with researchers and engineering teams.
- Manage, monitor, and mentor data pipeline efforts, including agent configuration and data publishing.
Job Qualifications:
- Exceptional academic background with a bachelor’s degree or higher in Computer Science or Computer Engineering.
- Experience in web crawling and/or scraping, with or without a framework
- Experience with the software development life cycle and developing large-scale software systems
- Proficiency in a high-level language such as Python, Java, or Perl
- Proficiency in a query language, particularly SQL
- Experience programming for and working with AWS infrastructure
- Excellent verbal and written communication skills in both Vietnamese and English
- Machine learning research and NLP experience is a great plus
Rewards:
- Competitive compensation package
- Collaboration with talented software engineers
- Opportunities to formulate and execute projects with real impact
- Growth in both personal and technical skills
- Contact Person: Đinh Xuân Hương
- Email: [email protected]
- Tel: +84-4 3971 2763 (ext: 132)