Schibsted is a growing and diverse family of over 50 brands whose mission is to empower people in their daily lives, and each brand contributes to it in its own way. Amongst our brands you can find leading Nordic marketplaces like Finn and Blocket, world-class media houses like VG and Aftonbladet (we are the largest media group in Scandinavia), and other rapidly developing digital companies like Prisjakt and Lendo.
Data Foundations is a central department within Schibsted, responsible for the data platform and working on data-fueled products with an emphasis on volume, velocity and privacy. We build products at scale, serving the whole of Schibsted and its brands.
We develop and maintain machine learning models that power many use cases across Schibsted, such as producing insights about our customers, building segments for online advertising on our sites and personalising news. Our pipelines currently process around 1.5-2 billion events per day, and the output of our models is used by the majority of Schibsted brands.
As part of Schibsted, you will also have the opportunity to share knowledge with and learn from other data engineers across the organisation. We encourage a diverse, collaborative and creative work environment, where you will develop and push for state-of-the-art solutions in big data processing as well as build reliable and highly scalable services.

About the role
- Engineer, implement, optimize and maintain highly scalable services and data pipelines
- Make use of PySpark, Scala, Kubernetes (K8s) and AWS
- Help define our development environment and promote best development practices within the organisation (e.g. code reviews, testing)
- Work with the product management team to find the best solutions to meet our customers' needs
- Ensure compliance with data governance, security policies and privacy laws
- Enable teams and local sites across the Schibsted organization to develop data-driven products and services through cross-team initiatives and collaboration
About you
- A Bachelor's degree in Computer Science or Informatics, or equivalent work experience
- Knowledge of and hands-on experience with Python and Spark, our main data-processing technologies
- Experience with Scala or other JVM languages: our services are implemented in Scala, so the ability to get up to speed with it is expected (experience with Java/Kotlin is a bonus)
- Familiarity with Kubernetes, orchestration frameworks (Airflow/Luigi), DevOps, CI/CD, cloud platforms (AWS/Azure/Google Cloud), container-based workflows or distributed systems is a plus