As a Software Developer, you will be required to implement advanced, GDPR-compliant big data and data analytics applications. You will be a member of a highly capable software team designing and developing applications for big data handling, data wrangling, anonymization, and data analytics. Our PaaS solutions will be deployed in cloud-native environments, providing highly scalable and secure data management. A great candidate will have a real passion for developing team-oriented solutions to complex engineering problems. This role is programming-intensive. This position reports to Trūata’s Technical Team Lead.
- Design, develop, and maintain Trūata’s data management platform components, including but not limited to a Data Lake, Data Privacy Engine, Data Ingestion Algorithms, Enterprise Application Integration (EAI), Analytics Engines, and Data Engineering tooling
- Full software lifecycle participation, including design, development, testing, bug fixing, and cloud deployment
- Participation in an Agile Scrum-based software development process
- Strong experience with at least one object-oriented or functional programming language: Scala, Java, Python, C/C++, or Go;
- Strong experience with Scala is a definite plus
- Solid foundation in data structures, algorithms and software design;
- As a senior engineer, you must have at least 4–6 years of coding experience, including a minimum of one year acting as a software architect and mentoring junior software engineers
- Expertise with big data Hadoop platforms such as Hortonworks, Cloudera, MapR, or Teradata; experience building large-scale Spark applications and data pipelines
- Expertise with platform engineering
- University degree in Computer Science or equivalent is a MUST. Advanced degree preferred.
- Experience working in an Agile environment following Scrum is a MUST.
- Creativity, a passion for tackling challenging data problems, and willingness to work in a start-up environment are a MUST.
- Experience in architecture and development of data models and data dictionaries in big data systems
- Extensive knowledge of the Hadoop stack and storage technologies, including HDFS, MapReduce, YARN, HBase, Hive, Sqoop, Impala, Spark, and Oozie
- Experience manipulating large datasets using data partitioning, transformations, and in-memory computations (large-scale joins, group-bys, and aggregations)
- Strong knowledge of NoSQL technologies (MongoDB, Cassandra, HBase, etc.), and/or object storage services from AWS or Azure, or open-source object storage solutions (Ceph, OpenStack Swift)
- Relevant experience with ANSI SQL using relational databases (Postgres, MySQL, Oracle, or others).
- Familiarity with at least one major cloud computing provider, such as AWS or Azure
- Familiarity with data integration/EAI technologies such as TIBCO or Kafka
- Experience with Git or other version control software
- Takes pride in elegant code, runtime performance optimization, and good programming habits generally
- Experienced in developing unit tests and integration tests, and committed to code quality
- Able to perform pair programming and peer code review with teammates
- Strong communication and interpersonal skills required.