- You work with clients to help them become data-driven organisations, where data engineers play a key role in designing and unlocking their data architecture.
- You will analyse clients' data strategies & build the data environments that allow them to achieve their objectives.
- You will create a data processing pipeline that may involve data ingestion & data storage.
- Data manipulation may also be required in Spark, Hive or Kafka.
- Working with Architects, you will advise on solutions in both design and integration, respecting established methodologies and proposing improvements to the architecture that help advance software & database development.
Some key technologies you can look forward to working with:
- Hadoop ecosystem, for example Cloudera, Hortonworks, Apache Spark, Kafka, Hive and Pig, among others
- NoSQL technologies including Cassandra, HBase & MongoDB
- AWS, Microsoft Azure, Google Cloud Platform
What we are looking for:
- Background in Data Engineering or a good understanding of Business Intelligence (ETL, data warehousing, data visualisation)
- 2+ years of professional experience in either Big Data, Data Engineering or Business Intelligence (ETL, data warehousing or data visualisation)
- Commercial solution architecture experience in Big Data technologies & environments such as Spark, Hadoop, Kafka, AWS, Microsoft Azure and GCP
- A good understanding of, and preferably some professional experience with, programming languages such as Python, Scala or Java
- Deep understanding of MPP & NoSQL databases
- Hands-on experience working with Linux, including Kerberos, multihoming and DNS
- Ability to work on large scale & complex projects for customers in a variety of industries
We would like to talk to you
- Sign up as an applicant or just send us a short email. Don't hesitate to contact us.
- We will share more info about the company and context you will work in.
- Instead of meeting face-to-face, we & the companies we work for use conference calls (Skype, Teams, ...).