Here's your opportunity to work with data at proper scale!
During 2019 Schibsted developed a new Data Strategy that includes a reinforced effort to become a truly data-driven business across all of our 45+ Brands. As the foundation and enabler for this, we are building the Schibsted Analytics Team, tasked with collecting, structuring, governing, serving and analyzing granular data across a range of data domains throughout the entire organization.
Analytics is about connecting people to data, and in Schibsted Analytics we want to do exactly that. Our vision is to "Enable all the people in Schibsted to leverage any data in Schibsted", and we will head towards it by removing friction between data and people throughout the 45+ Brands that make up Schibsted.
Our Data Engineers create and maintain high-performance, high-quality data pipelines that materialize Schibsted's cross-Brand Common Analytical Model (CAM). Working together with architects, BI engineers, software engineers and analytical stakeholders, you will design and implement an operational ETL/ELT architecture that delivers the shortest possible turnaround from high-volume raw data to the data model. You will establish the technical architecture, processes and best practices that allow for data model agility and extensibility, including contributions from outside the team.
This role will be central in realizing the analytical value targeted by our Data Strategy, working across all of our 45+ Brands to collect and refine vast amounts of data across many domains including, but not limited to, behavioural, content and user data.

Things you will do
- Build large-scale ELT pipelines leveraging AWS, Snowflake, Airflow, Matillion and many other state-of-the-art data and analytics technologies
- Load, aggregate and transform data into a materialization of the Schibsted-wide Common Analytical Model
- Extract and ingest raw data to the data lake from a wide scope of sources and businesses
- Work closely with our data platform team to ensure efficient and practical end-to-end data flows
- Proactively survey the technical architecture, tooling and solution landscape seeking opportunities to optimize time to data, data quality and model agility
- Collaborate and coordinate with data engineer peers to design and ensure data flows across Schibsted
- Be a key contributor to Common Analytical Model design and iteration
- Support the Data Quality Manager in populating the corporate data catalog with metadata and designing data governance workflows
- Ensure data privacy regulations are solidly and efficiently adhered to in all our data engineering work
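To give a flavour of the load-aggregate-transform work described above, here is a minimal sketch of turning raw log-level events into a per-Brand business model. The event fields, brand names and metrics are hypothetical illustrations, not Schibsted's actual CAM; a real pipeline would read events from the data lake and write to Snowflake rather than work with in-memory lists:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical raw log-level events; in practice these would be
# extracted from the data lake, not held in an in-memory list.
raw_events = [
    {"brand": "brand_a", "user_id": "u1", "ts": "2019-11-01T08:30:00"},
    {"brand": "brand_a", "user_id": "u2", "ts": "2019-11-01T09:10:00"},
    {"brand": "brand_a", "user_id": "u1", "ts": "2019-11-02T10:00:00"},
    {"brand": "brand_b", "user_id": "u3", "ts": "2019-11-01T12:45:00"},
]

def aggregate_daily(events):
    """Aggregate log-level events into per-brand, per-day metrics:
    total event count and number of distinct active users."""
    counts = defaultdict(int)
    users = defaultdict(set)
    for e in events:
        day = datetime.fromisoformat(e["ts"]).date().isoformat()
        key = (e["brand"], day)
        counts[key] += 1
        users[key].add(e["user_id"])
    return {
        key: {"events": counts[key], "active_users": len(users[key])}
        for key in counts
    }

daily = aggregate_daily(raw_events)
```

The same shape of aggregation, expressed in SQL over lake tables, is what the ELT pipelines orchestrate at scale.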
Who you are
- Demonstrated ability to aggregate large-scale log-level data into business models
- Experience with ETL/ELT and data modelling for business intelligence in an event-driven architecture
- Experience with building resilient data pipelines that scale
- Experience working in cloud computing environments
- Ability to design and implement complex data models spanning multiple data domains
- Absolute ninja with SQL
- Understanding of software development best practices; fluent in at least one of Scala, Python, Java or Kotlin, and interested in learning more
- Experience with stream and batch processing frameworks (e.g. Kinesis, Dataflow, Apache Flink...)
- Collaborative, analytical, business-oriented and pragmatic
- Able to balance long-term vs. short-term perspectives well
- Thrive on enabling others to deliver value