Open Positions: Full Stack | Data Engineering | Machine Learning
Locations: Amsterdam, Vienna, Berlin
At Aiconic we move, merge, and build machine learning systems on billions of rows and terabytes of data, reliably and reproducibly. We work on hard problems where the right choice matters, because people rely on our algorithms to make their most important decisions.
We care more about the ability to learn fast than experience with certain libraries or systems. Our engineering culture values the academic method and good code.
We offer the chance to work with interesting data on problems that matter, a friendly working environment with diligent coworkers, and the opportunity to shape the future slowly but surely.
We look for a track record of exceptional achievement: think hackathon winners, competitive schools, published papers, Kaggle masters, or extraordinary projects.
Our hiring process:
- Pre-Screening (Resume + Motivation)
- Machine Learning / Data / Coding Challenge
- Review of something you have already built
- Interview with the founders
- One week paid on-site with the team
Full Stack Engineer
You do the backend and the frontend; you are the architect and the DevOps engineer. You are the beating heart and decision maker of our architecture, design, and tech stack. You can learn a new language in a few days, and you have nightmares about non-PEP 8 code. You are not afraid to build, test, and iterate quickly, and you feel strongly about good code. You will make decisions on technology and architecture, and you will help data engineering and machine learning produce the best possible results.
- BS in Computer Science or evidence of extraordinary aptitude as a software engineer
- The ability to learn quickly and make decisions under constraints
- Strong Python skills
- Experience with basic machine learning concepts
- Knowledge of Kubernetes and Kubernetes apps
Data Engineer
You swim through data. You can write a program in SQL (not that you'd want to), and you treat data architecture like the holy grail, because it is. You care deeply about reproducible, scalable pipelines and clean data sets. You will build systems that move data from A to B, reliably, under all sorts of constraints and heavy fire, to the places where it matters most.
- BS in Computer Science, evidence of extraordinary aptitude as a data engineer, or 5+ years of experience as a data engineer
- The ability to write a memory-efficient program
- Deep expertise in data storage, movement, and management systems
- Knowledge of distributed systems (e.g. HDFS/Hadoop) and horizontal database scaling
- Strong understanding of SQL
- Ability to work with ETL pipelines (i.e. how good you are at moving data from A to B under all sorts of constraints)
- Excellent scripting skills in Bash and Python
Nice to have:
- MS/PhD in Computer Science or similar
- Experience with Airflow, Luigi, or similar ETL pipeline software
- Experience with AWS, Azure, or Google Cloud
- Experience with legacy database systems like Oracle and DB2
Machine Learning Research Engineer
You live and breathe creative problem solving. You're a scientist with an engineering heart. You care deeply about experimental setup and the scientific method, and you focus on results and real-world impact. You will build machine learning models on billions of rows of data and help create APIs that serve predictions for key decisions where it matters. You will generalize everyday problems to global patterns and collaborate on ongoing research efforts.
- MS in ML, Physics, Mathematics, Statistics, CS or evidence of extraordinary quantitative aptitude
- The foresight to consider the engineering that comes before you, and the product your algorithm lives in
- Ability to work with messy real-world data and the desire to create real-world impact
Nice to have:
- PhD or papers published at top research conferences
- Track record of building competitive algorithms (e.g. Kaggle)
- A thoroughly documented, presentable project demonstrating aptitude