Stef Van Grieken - Cradle



Stef van Grieken is the co-founder and CEO of the biotech company Cradle.

Based between Delft and Zurich, Cradle’s machine learning platform helps scientists design and program ‘cell factories’ faster and more successfully, speeding up the development of more environmentally friendly ways of producing almost anything.

Cradle recently raised a $6 million seed round co-led by Index Ventures and Kindred Capital.

Prior to Cradle, Stef spent seven years at Google, including as a senior product manager for Google AI (known as Google Brain) and for Google X, known for its moonshot projects.

During this episode we discuss the founding story of Cradle, how you can program cells to produce milk, plastics, and petrol-free chemicals, among other things, and why Europeans should change from being definite pessimists to definite optimists and be more entrepreneurial. Let’s do it, guys.

Stef is one of that rare breed of entrepreneurs who pair a high amount of raw intelligence with an equal amount of kindness.

Brought to you by


Rows - the spreadsheet where data comes to life. Connected to your business data and delightful to share, Rows is how teams work with numbers and share their results.


Athletic Greens - all-in-one nutritional insurance. AG covers my bases with vitamins, minerals, and whole-food-sourced micronutrients that support gut health and the immune system.

Mentions during the episode

Input from some of the investors in Cradle

Sofia Dolfe - Partner at Index Ventures

AI-powered protein design has seen striking development in recent years, with Cradle placing itself at the center of the conversation. By providing bio-engineers with tools to accelerate the design process, Stef and the entire Cradle team have both the vision and the technical and commercial abilities to drive meaningful advancements in this space, and we're incredibly excited to partner with them.

Leila Rastegar Zegna - General Partner at Kindred Capital

Stef really struck us as one of those incredibly unique founders who has the courage to undertake something incredibly audacious, but also the genius to make it happen. We love founders who have so much raw curiosity and raw capacity that they may have grown up in one discipline. So in Stef's case, sort of the world of bits, but have become so fascinated with another world, in this case, the world of atoms that they really steep themselves and immerse themselves in it.

Transcript - edited for clarity

Calin: How did you assess the risk of starting a startup?

Stef: I've always tried to do riskier projects, even within my work at Google, typically at an early stage, or starting something new. Of course there is a higher probability of a startup failing, but then you just go do another one; that's what they taught me in San Francisco.

You get paid less and that kind of thing, but I'm not super money-driven, and I think I'm probably still employable after this if it fails.

Calin: Tell me more about your time at Google. What type of projects did you work with?

Stef: So I started my career at Google and worked there for seven years. Super grateful for having had the opportunity, initially as a software engineer.

Initially I worked on a team called Social Impact. First we were doing things like elections information in Google Search: who can you vote for, where are the voting locations, those types of things. And then another set of features that I hope you've never seen: the crisis response area of Search and Maps. Let's say there was a massive snowstorm in Stockholm right now; you would get alerts and a sense of where you could go to be safe.

I realized a lot of people at Google are just incredibly good at writing code, and I'm a little sloppy, to be honest. After a year and a half I decided to go into product management and ended up in a team called Location Insights. They do a lot of work based on where all the phones go: you figure out where the traffic is and how busy an area is, those types of things.

That team sat between Android and Maps, and I stayed with the Android team to work on Android for the car.

It was super hot to put Android in all the things, like watches, TVs, and that kind of thing; cars were maybe not the most obvious wearable, but it was a super fun project. Android is made to have apps and be captivating, but obviously in a car you don't want to watch YouTube videos, because that's just super unsafe.

I did that for a while, but once it was in the market, I was just less excited. A theme in my career has been going from zero to one.

Then I went to Google X and spent a couple of years there. While I was working on Maps, we were using some early machine-learning approaches for analyses like how busy the traffic is, and I always kept in touch with the people working there.

At the time it became pretty apparent that the chips needed to train these very big models were one of the biggest limitations. The data was there, but the capacity to train and serve the models wasn't. So I ended up building a team in Google X, together with my engineering co-founder, that made a chip that does really well on machine learning workloads.

That team was ultimately not spun out, as Google wanted to retain the technology. So in my last job at Google, I was on the product leadership team of Google Research and Machine Intelligence, responsible for all the hardware and compilers we were using to train and serve machine learning models.

Then I also started a couple more applied research projects, which is when I fell in love with biology.

Transformers and large language models were already a thing three years ago, but they were more in the research phase; obviously now they're super popular. I think my mom now knows what a large language model is. Back then, most people were applying them to retrieval, search, translation, or ranking ads, but a few people were applying them in very different domains, like biology. That's what caught my curiosity in this domain.

And I think I'm still in my honeymoon phase with biology for sure.

Calin: Google X is known for its moonshot projects. How would you decide which moonshots to focus on?

Stef: We were using a Venn diagram with three bubbles.

  1. One was: is the problem large enough? That would usually be defined, to your point, by: can it impact more than a billion people? That was sort of the low bar.
  2. Then: is there a breakthrough technology? That was defined by: is there some technological insight that's non-trivial, where you can get a lot better than what's currently out there?
  3. And then the third one was: is the proposed solution radical enough? They didn't want to do anything that's incremental.

For example, with driving: removing the driver from a car is a pretty significant step, instead of iterating your way towards something else. So that's the thinking of being very audacious. And then it was really a portfolio approach: what they called the early-stage pipeline would be 30 to maybe a hundred projects, and people would try the craziest stuff.

The chances were very high that you wouldn't make it, and projects were time-bound: you had nine months to show results, otherwise kill it. You were trained to be intellectually honest and kill things if they didn't work out.

And then there were three stages, like a venture model, where your team could grow a little bit until you were big enough to spin out, which could mean either becoming your own Alphabet company or going back into Google.

For example, some of the early work on ML, like Google Brain, actually started out in X but then became a Google project. And some things, like Waymo and the self-driving car and a few other projects, maybe got too big and should have been killed earlier or spun out.

Calin: You've been working with ML models at scale fairly early. How fast can ML models learn on different types of data, and then be actually practical?

Stef: I think that's a great question. It honestly depends on the use case. To give you a couple of examples:

  1. In driving you want to be very good: driving 10% correctly is not very good, but even driving 99.999999999% correctly is still not good enough, right? Humans are actually not that bad at driving, if they don't use any substances.
  2. For something like translation, 97% is already pretty good.
  3. For something like search, maybe 80% is already pretty good.
  4. Some of the discussions around ChatGPT I find interesting. OpenAI actually published a blog post where they said it's like 30% truthful. That's not very truthful.

How fresh is the information? How truthful is it? It really depends on the type of use case, and this is something people often underestimate. Typically I would say: compare the task the model can do with how good a human is at it, and then see if you can get to par or better.

As an angel investor, I would invest in businesses where the machine clearly beats the human at the task. It turns out that making robots grab things is really hard. Our motor functions are super complicated, and it looks very obvious to us that you can just grab an apple.

But it turns out that's an unsolved problem for machine learning today. There are dozens of techniques and architectures and other things; you just have to go look at the papers if you're interested in that.

Calin: What memories do you have from your time at Google or Alphabet that impacted the way you build products today?

Stef: For me it was quite an experience as a Dutch guy. I wasn't very good at college; I think I got lucky getting into Google.

Then suddenly you were in Silicon Valley, where people were optimistic and Elon Musk was proposing that rockets should do backflips in the atmosphere, that kind of thing.

Something the Dutch would never, ever do. And suddenly you're in this company that isn't consulting or the typical sort of job you would have if you were in the Netherlands at the time.

There were a couple of things that really stood out.

  1. The first thing is: build incredibly high-quality teams. I think that stayed with me throughout my entire career. Ultimately, what makes or breaks things is mostly the humans you work with. I think we put too much emphasis on ideas and too little on execution: do you have the humans to do it?
  2. The second one is: focus on the user. People tend to get lost in the details and don't even ask users. That's actually one of the reasons why I left Google. I felt it was getting so big at a certain point that processes had taken over. Most of my job was reviews and getting people to flip bits and give me approvals. I would ask VPs:

     “Have you talked to a user in the last quarter?”

     I don't like that.

  3. The third thing, and it leads back to my intro: the audaciousness, especially of folks in the United States, is really amazing. In Europe I think we're somewhat definite pessimists. When a large group of executives would come to Silicon Valley and ask me to do a talk, I would ask them:

     “Would you invest in a rocket that does backflips?”

     Everybody's like, “No, no, no, no, of course not. That sounds like a ridiculous idea.”

     And in Silicon Valley especially, the culture is very much:

     “Huh, that's interesting. Why are you even proposing that?”

     There's a certain curiosity and audaciousness where people get to try hard things. It was very counterintuitive to me that that is actually a very positive thing.

Those were, I think, three things that really stood out for me. As Google has grown, it's getting very big, and as a result there's a very communal culture where a lot of people are involved in decisions. I think that is not scaling very well and it's just making things a lot slower.

Calin: Tell us the founding story of Cradle.

Stef: To set the context a bit more: the Google Brain team is the research team within Google that applies ML to all kinds of problems. I got in touch with one of my co-founders, Eli, who was working on a team applying that to DNA.

I was like,

“Why would you want to do that? That doesn't seem like a Google product?”

But he got me super excited, because you can think of DNA as this alien programming language that we found, but have no way of writing to. We don't have compilers or a formal language we can use to make it do what we want.

It's highly complex. All of the chemistry that you can do in your body is just phenomenal, and that's literally just strings of sugars. And he was like:

“You know, this is really much better than some of the tools that these people are using today.”

And just to give you an example: if you're a really good team in biology and you're making changes to DNA to make it do what you want, and you get it right 1% of the time, then you're very good. And then it takes about four weeks to test whether your thing actually works.

So the equivalent in computer programming would be: here's a blob of binary, you don't get a compiler, go flip some bits and hit run, and four weeks from now we'll tell you whether you did something correctly or not. The way they solve that is by trying a hundred programs.

That's the TL;DR of what these people are doing. Very dedicated, lots of awesome science, but that's a really hard problem. So I got super fascinated by that and started reading books. A pro tip for some of the people in your audience: the best question I think you can ask people you interview in a new space is:

“Which books in undergrad actually mattered?”

It's usually two or three; just go read those and you'll have a pretty good introduction to a new field. It takes a couple of afternoons, but it's fine. I reached out and talked with 100 biologists.

It was kind of funny, because this is how I met my co-founder, Elise. One of the Dutch newspapers had covered her, and apparently she was in Joshua Tree in California, climbing a mountain, when she got a text from a friend that said:

“Some dude from Google wants to talk to you about using AI to revolutionise biology.”

And her first response was:

“Arrogant bastard.”

Anyway, it all worked out in the end. 😂

I basically learned three things after those interviews:

  1. One, it is a really hard problem to make biology do what you want.
  2. Second, the software that is built for biotech people is not great. I've been a tool builder throughout my career, and the tools are just not very good. It's like making an app in the nineties: good luck with that. You had to call IBM and get a mainframe. There was a lot standing in the way of something that a small team should be able to build easily.
  3. And then the third thing: I just realized the sheer magnitude of the opportunity. If you look at the broader picture, most of the inputs we use for our society today are animals, oil, and then ores and things from the ground, like rare metals. But for the first two categories, anything that's biological, nature has found the most energy-efficient way of making it. McKinsey actually calculated that something like 60% of our current economic inputs could be made with biology instead of petrochemicals or animals. The statistics are just wild.

Anyway, I got excited, as you can hear, dug in a little bit more, and a little over a year and a half ago we started Cradle to make programming biology a little bit easier for science teams.

Calin: What is your advice for someone raising a seed round now?

Stef: Yeah, the market has obviously changed from a couple of years ago, and I think for the better, to be honest. Valuations were just bizarre. One of the things I learned from our process is that starting with the right team is probably the most important thing people can do.

I see a lot of people prematurely start raising money without having figured out their plan and figured out their team. I think in our case, we did figure out the team. The plan was still a bit shaky, I must confess.

A lot of these VCs are panicking a bit right now because all their crypto funds are down, and it's fine to take a little bit of dilution early on. The actual valuation doesn't really matter that much; it's a bridge loan into the future. Don't optimize for valuation, optimize for working with the right partner. We really spent a lot of time trying to figure out if we liked these humans.

It's like a marriage: we're going to be hanging out with them for quite a while. We have some really awesome investors that we really love. That's what I would recommend. Other than that, it's a great time to build. A lot of good companies get built in these moments of scarcity, when suddenly gravity returns to markets. It's not just about growth and that kind of thing; actual unit economics matter.

I much prefer that world, where a product is only done when it can sustain itself. A lot of financial engineering is cool and can drive valuations up, but I don't think it's in the long-term interest of the product either.

Calin: How did your perspective as an angel investor help you during the fundraise?

Stef: Simplify, simplify. Don't assume VCs actually know something, especially in deep tech. A lot of founders fall in love with their technology and think it's obvious to everybody. You really need to bring it back to fundamentals.

Explain why your team is capable of executing this, and explain what the actual problem is. Don't be a technology in search of a problem.

I would recommend any founder become an angel investor on AngelList, just for a month. You don't even have to put actual money into things; just looking at all the decks in your space is fantastic education for making a better deck.

Calin: If Stef from 10 years ago listened to this episode, what would you tell him?

Stef: I was brought up in Dutch society, which is pretty risk-avoidant. There's actually a saying in Dutch culture:

“Don't stick your head above the hayfield, because it will be cut off.”

… and I think especially in Europe, we are way too risk averse.

And I was very risk-averse initially as well. It's actually my wife who said: you should not work for McKinsey, you should just go do something fun, and I'll move, and whatever. One person in the Bay Area described it really nicely: Europeans are definite pessimists.

Europe has awesome people, and there's so much safety. You know what happens in the US if your startup fails? You get no money, absolutely nothing. There's no social net whatsoever. Whereas here in Switzerland, when I stopped working for Google, I got a call:

“We noticed you don't have a job anymore. Do you need money?”

“No, I'm fine. I'm in between things.”

Everything in Europe is pretty cool; the only thing is that nobody's trying anything. And I think that's actually an opportunity, which will hopefully grow as more examples of cool startups get built here.

We write all the papers, we have all the awesome universities. There's nothing wrong with Europe in that sense; it's just that a lot of people have this very well-laid-out career path at some bank or the like.

Like, oh my god.

Be a definite optimist; try things. Of all the strategies people have tried to teach me, that's the one I was definitely not expecting, and the one that yielded the best results for me. And it makes sense: if you work on something that's hard, it attracts much better people.

Good people have done cool things before, and they want to do something more ambitious. It has a much higher impact on the world, so you can be proud of yourself. And then the other cool thing: typically there's less competition. You could do another fintech SaaS-for-SaaS-for-SaaS thing, but I think we have enough APIs for money, people.

So everybody should go do an undergrad in biology and go build some bugs!