17 / 01 / 2019

Software Engineer - Big Data & Analytics

Trūata is recruiting for the Software Engineering team in Dublin, Ireland. As a Software Engineer, you will be required to implement advanced GDPR-compliant big data and data analytics applications. You will be a member of a highly capable software team designing and developing applications for big data handling, data wrangling, anonymization, and data analytics.

Our PaaS solutions will be deployed in cloud-native environments providing highly scalable and secure data management. A great candidate will have a real passion for developing team-oriented solutions to complex engineering problems. This role is programming-intensive.

This position reports to Trūata’s Technical Team Lead.

Key Responsibilities:

  • Design, develop, and maintain Trūata’s data management platform components, including a Data Lake, a Data Privacy Engine, data ingestion algorithms, Enterprise Application Integration (EAI), analytics engines, and data engineering tooling
  • Participate in the full software lifecycle, including design, development, testing, bug fixing, and cloud deployment
  • Participate in an Agile Scrum-based software development process

What you need:

  • University degree in Computer Science or equivalent is a MUST; an advanced degree is preferred
  • 2+ years’ experience in software application development as a programmer, preferably in a data integration, ETL, and/or business intelligence/analytics function
  • Expertise with Big Data Hadoop platforms such as Hortonworks, Cloudera, MapR, or Teradata, and a solid fundamental understanding of the Hadoop architecture, is preferred
  • Excellent knowledge of and skill with object-oriented programming, data structures, and algorithms in at least one OOP language (Scala, Python, or Java preferred)
  • Experience building large-scale Spark applications and data pipelines, ideally with batch processing running on Hadoop clusters; experience tuning and optimizing Spark job performance is a definite plus
  • Experience architecting and developing data models and data dictionaries in big data systems
  • Extensive knowledge of the Hadoop stack and storage technologies, including HDFS, MapReduce, YARN, HBase, Hive, Sqoop, Impala, Spark, and Oozie
  • Experience manipulating large datasets using data partitioning, transformations, and in-memory computations (large-scale joins, group-bys, and aggregations)
  • Strong knowledge of NoSQL technologies (MongoDB, Cassandra, HBase, etc.) and/or object storage services from AWS or Azure, or open-source object storage solutions (Ceph, OpenStack Swift)
  • Relevant experience with ANSI SQL on relational databases (Postgres, MySQL, Oracle, or others)
  • Familiarity with at least one major cloud computing provider, such as AWS or Azure
  • Familiarity with data integration/EAI technologies such as TIBCO or Kafka
  • Experience with Git or other version control software
  • Takes pride in elegant code, optimized runtime performance, and good programming habits in general
  • Experienced in developing unit tests and integration tests, and prides themselves on quality code
  • Able to pair program and conduct peer code reviews with teammates
  • Experience working in an Agile environment following Scrum is a MUST
  • Creativity, a passion for tackling challenging data problems, and willingness to work in a start-up environment are a MUST
  • Strong communication and interpersonal skills are required
