- ⚙️ Data Engineer skilled in building modular, cloud-based ETL pipelines
- 🔢 Academic background in Mathematics (BSc) & Data Science (MSc)
- 💼 Brief career in Management & Public Health Consulting
- 💻 Passionate about data architecture and containerized workflows
- Specialty: writing readable ETL modules and configuring virtual machines with Terraform
-
Scalable and Delegable Pipelines: I believe in designing pipelines that are easy to operate and maintain, so teams can focus on building new solutions rather than babysitting old ones. This contrasts with the view that only the author can maintain their own code: a good pipeline needs little to no author intervention after deployment.
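A minimal Python sketch (with illustrative stage and field names, not taken from any real pipeline) of what "delegable" can mean in practice: small, composable stages that any teammate can read, swap, or extend without the author's help:

```python
# Illustrative sketch of a delegable pipeline: each stage is a small,
# pure function, and a generic runner composes them in order.
def extract():
    # Stand-in for reading rows from a source system.
    return [{"id": 1, "amount": "10.5"}, {"id": 2, "amount": "3.0"}]

def cast_amounts(rows):
    # Normalize types early so downstream stages can rely on them.
    return [{**r, "amount": float(r["amount"])} for r in rows]

def drop_small(rows, threshold=5.0):
    # Filter out rows below a configurable threshold.
    return [r for r in rows if r["amount"] >= threshold]

def run_pipeline(rows, stages):
    # The runner knows nothing about individual stages.
    for stage in stages:
        rows = stage(rows)
    return rows

result = run_pipeline(extract(), [cast_amounts, drop_small])
```

Because every stage has the same shape, maintaining the pipeline means editing one small function rather than understanding the whole system.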
-
Functionality before refinement: I get things off the ground with a pipeline that works, delivering value quickly, before refining it into a more polished, future-proof version. This saves time and lets me adapt the design based on real-world feedback.
-
Security and Modularity as Cornerstones: Secure and modular designs are fundamental to my work. I focus (sometimes too much) on best practices such as secret management, configurable rather than hardcoded paths, and modular structure, so that pipelines stay robust, compliant, and easy to maintain.
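As an illustration of keeping secrets and paths out of code, here is a minimal Python sketch; the variable names (`DB_PASSWORD`, `DATA_DIR`) are hypothetical, and real deployments would typically source them from a secret manager or injected environment:

```python
# Sketch: configuration read from the environment, never hardcoded.
import os
from pathlib import Path

def load_config(env=None):
    """Build pipeline config from environment variables, failing fast."""
    env = os.environ if env is None else env
    try:
        # Secrets are injected at runtime, never committed to the repo.
        db_password = env["DB_PASSWORD"]
    except KeyError:
        raise RuntimeError("DB_PASSWORD must be set in the environment")
    # Paths are configurable too, with a sensible default.
    data_dir = Path(env.get("DATA_DIR", "/tmp/etl"))
    return {"db_password": db_password, "data_dir": data_dir}

cfg = load_config({"DB_PASSWORD": "s3cret"})  # dict stands in for os.environ
```

Failing fast on a missing secret surfaces misconfiguration at startup instead of midway through a run.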
- RegEx
- JVM languages (Java & Scala)
- Go (for writing Kafka producers and multithreaded workloads)