Software Engineer – 6 month contract
Founded by Mastercard and IBM, Trūata offers a new approach to data anonymization and analytics, helping organizations meet the standards of personal data protection envisioned by the GDPR. Trūata provides its customers with fully anonymized analytics and reports that they can use in their own products and solutions. We are based in Sandyford, Dublin 18.
We are currently recruiting Engineering Contractors to join the Platform Engineering team in Dublin. This is an excellent opportunity for somebody who wants to work as part of a start-up building something new and exciting.
As a Software Engineer, you will implement advanced, GDPR-compliant big data and data analytics applications. You will be a member of a highly capable software team designing and developing applications for big data handling, data wrangling, anonymization, and data analytics. Our PaaS solutions will be deployed in cloud-native environments providing highly scalable and secure data management. A great candidate will have a real passion for developing team-oriented solutions to complex engineering problems. This role is programming intensive.
This position reports into Trūata’s Engineering Manager.
- Design, develop, and maintain Trūata’s data management platform components, which include but are not limited to a Data Lake, Data Privacy Engine, Data Ingestion Algorithms, Enterprise Application Integration (EAI), Analytics Engines, Data Engineering, etc.
- Full software lifecycle participation including design, development, testing, bug fixing, cloud deployment, etc.
- Participation in an Agile Scrum-based software development process
What you need
- University degree in Computer Science or equivalent. Advanced degree preferred.
- Strong coding experience in Scala or Java, plus Spark
- Solid foundation in data structures, algorithms and software design
- Experience working in an Agile environment following Scrum
- Creativity, a passion for tackling challenging data problems, and willingness to work in a start-up environment
What you should also have
- Expertise with big data Hadoop platforms such as Hortonworks, Cloudera, MapR, Teradata, etc.; experience building large-scale Spark applications and data pipelines
- Expertise with platform engineering
- Experience architecting and developing data models and data dictionaries in big data systems
- Extensive knowledge of the Hadoop stack and storage technologies, including HDFS, MapReduce, YARN, HBase, Hive, Sqoop, Impala, Spark, and Oozie
- Experience manipulating large datasets using data partitioning, transformations, and in-memory computations (large-scale joins, group-bys, and aggregations)
- Strong knowledge of NoSQL technologies (MongoDB, Cassandra, HBase, etc.), and/or object storage services from AWS or Azure, or open-source object storage solutions (Ceph, OpenStack Swift)