Experience with Google Cloud
What are you going to do?
As a data engineer you build data pipelines, industrialize machine learning and operations research models, and replace legacy data-warehousing systems with state-of-the-art data lake solutions. You do this as part of a central BI department, from where you work in different businesses, often as part of a product team. The businesses you work for are either up in the air or on the ground and can vary from operations to finance. All these different KLM business partners need the scalable, future-proof solutions you design, making use of structured and unstructured data. Once a solution is built, you ensure it becomes operational and stays that way! All this, of course, in an agile environment.
Our BI and Big Data department is officially located at Schiphol-Rijk, but most of the time you will work in the offices where the businesses are located, which can vary from Schiphol-Centrum and Schiphol-Oost to Amstelveen. Sharing your expertise with others is key, as there are thirty data engineers in the department who like to learn from you too!

This position will be at Martinair, a cargo airline and subsidiary of KLM. Martinair's IT initiatives are handled by a KLM-based Martinair product team. The team works on Martinair data use cases and creates and maintains data pipelines: extracting data from various sources, transforming it, and storing it on HDFS and a SQL Server database. The data stored in SQL Server is also used to create reports and dashboards for different business teams within Martinair. As part of a new initiative, the team would like to implement a cloud-based solution for new data sources, and that is why the team is looking for a new team member with this specific skill set.
- Preferably a Bachelor's degree or higher in Computer Science, Software Engineering, or another relevant field (less important in light of work experience)
- At least 3 years of experience building production-grade data processing systems as a Data Engineer
- In-depth knowledge of:
  - the Hadoop ecosystem
  - building applications with Apache Spark
  - columnar storage solutions like Apache HBase, including data modelling for columnar storage
- Experience with event-streaming platforms like Apache Kafka
- Experience developing on the Google Cloud Platform
- At least 2 years of experience with Scala
- Experience with Spark Streaming
- Experience with Google Cloud
- Understanding of common algorithms and data structures
- Experience with databases and SQL
- Knowledge of continuous integration/continuous deployment (CI/CD) techniques
- Knowledge of scheduling tools like Control-M is a plus
- Good communication skills (both written and verbal) in English
- Strong analytical and problem-solving skills
- Knowledge of agile methodology is a plus
To verify the reliability and integrity of candidates, screening is part of the application procedure. For questions about the screening procedure, please contact the contact person listed with the relevant vacancy or interim assignment.