First In Human By Vial

Episode 40: Alfredo Andere - Co-Founder & CEO at LatchBio

September 12, 2023 Alfredo Andere Season 2 Episode 40

Does the convergence of tech and biology hold the key to reshaping biotech's data infrastructure? Our recent chat with Alfredo Andere, Co-Founder and CEO at LatchBio, certainly supports this notion. This episode offers a deep dive into Alfredo's journey from being part of the Google Brain team to co-founding Latch Bio - a company that is making waves in the biotech industry with its innovative solutions and has raised a whopping $33 million.

First In Human is a biotech-focused podcast that interviews industry leaders and investors to learn about their journey to in-human clinical trials. Presented by Vial, a tech-enabled CRO, hosted by Simon Burns, CEO & Co-Founder. Episodes launch weekly on Tuesdays. To view the full transcript of this episode, click here.

Interested in being featured as a guest on First In Human? Please reach out to catie@vial.com.

🎧 Stay in the Loop!

For the latest news and updates, visit our website: https://vial.com

Follow us on social media for real-time insights:

Twitter: https://twitter.com/VialTrials

LinkedIn: https://www.linkedin.com/company/vialtrials

Speaker 1:

You are listening to First In Human, where we interview industry leaders and investors to learn about their journey to in-human clinical trials. Presented by Vial, a tech-enabled CRO, hosted by Simon Burns, CEO and Co-Founder. For Season 2, Episode 2, we connect with Alfredo Andere, Co-Founder and CEO at LatchBio. Learn more about the convergence of technology and biology at LatchBio and their quest to reshape biotech's data infrastructure.

Speaker 2:

Thank you for joining us on First In Human, Alfredo.

Speaker 3:

Thank you for having me.

Speaker 2:

I've always loved hearing your story. You guys have an interesting mix of tech and bio, a really young team, and it's really impressive what you've done in such a short period of time. Walk me through the journey: how did you get here? And I'd love to hear more about your backgrounds.

Speaker 3:

I'm happy to walk you through that. I mean, any journey of mine would be incomplete without my co-founders, Kyle Giffin and Kenny Workman. We actually went to school together. We met freshman, sophomore year, and we always remained friends through school. We were each into our own interests, and during COVID we actually started working together on different projects. Not a startup, just working together on many different projects, one project leading to the next and to the next. At some point we were like, okay, this is getting pretty serious. We'd been working together for, I think at that point, maybe a year, maybe seven months.

Speaker 3:

What if we take this more seriously? What if we try to make a company here, and try to make something that's actually really useful to a lot of people? In doing that, we realized most of the projects we had been doing up to then were probably not super valuable. But what is valuable? Why don't we go out there and figure out where we can add a lot of value? As we started talking to a lot of people about their problems and their work, learning about just every area you can imagine, one problem kept standing out to us, and that was data infrastructure in biotech. Going a step back: at the time, that summer, I was working at Google on their Brain team, building data infrastructure and seeing the best data infrastructure in the world.

Speaker 3:

It was incredible. I had worked at Facebook before, also incredible data infrastructure. And what is it being used for? I mean, at the end of the day, really for optimizing advertisements, right, getting you to click on stuff you don't really want. Meanwhile, on the other hand, Kenny comes from a bio background, had been in the wet lab since he was 14, had been interning at Asimov. At the time, Asimov was actually pretty great, but he'd also seen other biotech companies and labs, and the data infrastructure they had there. These companies were trying to cure cancer. You guys recently launched Battery Bio, taking on many of these diseases: rare disease, genetic disease, global warming, aging. The most inspiring missions you can imagine. And they were transferring data around on hard drives. Their data infrastructure looked like it was from 20 years ago.

Speaker 3:

When you talked to people, it was very clear there was a huge problem. We knew we had to do something about that, but we didn't know what. We went and held interviews with 200 people about why things were the way they were. We realized that this was a much more massive problem than we had even imagined initially. There was really no one doing anything with the quality and rigor that we thought this problem needed to be addressed with. And so we started again interviewing companies, but this time asking: hey, what can we build for you that you will pay for? At some point, we got six companies to pay us to build no-code interfaces for their pipelines, and that's how we started out, as a no-code CRISPR pipeline company.

Speaker 3:

One thing led to another. We gave these six companies kind of what they asked for. Some of them were happy, some of them were not.

Speaker 3:

But as they started using it, we realized they were making no-code pipelines, but they also wanted to make their own pipelines, have no-code interfaces for them, and launch them in the cloud. So let's also give them that. And then we gave them that, and it was like, okay, they're bringing their data from somewhere else, usually stored in S3 or Google Drive or Dropbox, in a very non-traceable, non-versioned way. Can we make better data storage for them to keep all their data and then feed it into the pipelines? And so we built Latch Data. Then we realized, okay, people bring their data into Latch Data, then they run it through our pipelines, either their own custom pipelines or our no-code pipelines, but then for the end analysis they take it back to a Jupyter notebook.

Speaker 3:

That is hard to configure, and it breaks the traceability and the versioning of all the data there.

Speaker 3:

But they need more custom analysis, and we were like, okay, let's build that component now.

Speaker 3:

And lastly, it was, okay, we have these components, but you still need kind of a database. They were using Notion, they were using a registry; most of them were actually using CSVs hosted on Box. We had just discovered another problem, and we were like, okay, can we build one database, also hosted here, have that traceability and versioning layer, have that ability to collaborate on the same interface in the platform? And so at this point we went from a no-code interface to now being able to fully replace the cloud computing platform, whether it's AWS, Azure or GCP, for these companies, and save them 95% of the setup time and just get them going really fast. Today we have about 60-plus paying biotech customers, over 100 academic labs using Latch for free, and 12 full-time people at the company. We've raised about $33 million from Lux Capital, Coatue and others, and our usage is doubling every six to 12 weeks. There's just so much to do in this space to help these inspiring companies, and we're just continuing to build there.

Speaker 2:

I've been super impressed by the software. It's beautiful, it's well designed. I've been super impressed by the transparency: you guys have changelogs. The speed: you guys move really fast. Even the copy is kind of refreshing and light. I feel like you're building a modern software company, and you're building it in a space that doesn't have a lot of modern software companies. Take me through how you've thought about some of the core cultural elements that you need to put in place to do that. It's not easy; there's a reason it hasn't been done yet.

Speaker 3:

Thank you so much.

Speaker 3:

By the way, I think I can say the same about Vial and the new projects that you guys are launching, so I'm also curious to hear your thoughts. But I think for us it's really been about the people: just hiring really great people and then having these huge constraints on both the people and what each one has to do. I think most innovation just comes from having this great team that is incredibly capable, incredibly ambitious and dedicated to a central mission; pointing them to a shared goal that everyone agrees on and stands behind; and then just letting them go, giving them the freedom and the resources to get to that goal, and obviously the pressure and the inspiration too, emphasizing that we really need to get there for many reasons, the mission first and foremost. I am genuinely, to this day, constantly surprised by how much gets done by people who really care when they are set towards a big, hairy goal. Things I couldn't even have imagined, heroic amounts of effort, just get done when you give people that freedom and those resources in that direction, towards a shared goal.

Speaker 2:

I think we're both believers in Elliot Hershberg's "The Century of Biology." We're entering this next era, and the next era critically needs infrastructure. I think you guys are clearly working on that as a core thesis. Maybe tell me more about that: why infrastructure is so important, and what do you think it's going to start enabling once we have it built up?

Speaker 3:

Totally. And, by the way, shout out to Elliot, he's awesome, and if anyone hasn't read the piece that you wrote recently about Battery Bio, it's super inspiring. But I think with this infrastructure, currently, what we're seeing with biotechs, and the reason we knew that the problem we were solving was really large, was because we weren't going to companies telling them, hey, we have this new capability that will give you this kind of side thing. We were going to companies and asking them: how have you solved this problem? And when I say this problem: 10 years ago you kind of had pipettes, and then you had instruments in your wet lab that could tell you the result of your experiment. The cell counter is actually really funny, because people outside of biology think of a cell counter and they're like, oh yeah, it's this complex instrument. But people within biology know a cell counter is just the thing that you click, kind of like the stadium people counter, and you're just counting your cells through a microscope. So that's 10 years ago.

Speaker 3:

Fast forward 10 years to today, and you have an NGS experiment giving you 10 million data points.

Speaker 3:

You can't count 10 million of anything, right?

Speaker 3:

You need a lot of compute to process it. And so this is one example where companies doing NGS, which is many companies these days, were each rebuilding a solution to take that data and turn it into interpretable results.
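To make the scale argument concrete, here is a toy sketch (not from the episode; the tiny in-memory FASTQ is invented for illustration): even the simplest question about NGS output, how many reads and what base composition, is already a programming task once an instrument emits millions of records.

```python
import io

# A miniature FASTQ file held in memory; real runs produce millions of records.
fastq = io.StringIO(
    "@read1\nACGT\n+\nIIII\n"
    "@read2\nGGTA\n+\nIIII\n"
    "@read3\nACGG\n+\nIIII\n"
)

# FASTQ stores each read as 4 lines: header, sequence, separator, qualities.
reads = 0
bases = {}
for i, line in enumerate(fastq):
    if i % 4 == 1:  # the sequence line of each record
        reads += 1
        for b in line.strip():
            bases[b] = bases.get(b, 0) + 1

print(reads, bases)  # 3 reads and their base tallies
```

The same loop runs unchanged whether the input is three reads or ten million; a hand clicker does not.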

Speaker 3:

And this is the part where you wonder: why is the infrastructure not there? We have 100 companies all setting up the same thing, spending millions in engineering resources and lots of their time, driving that time away from their core differentiating thing, the thing they're the only company in the world able to do. And yet they're spending 50% of their time doing DevOps, which literally every company has to do. So we built the infrastructure so that everyone can just plug into it and get going from day one. I think companies that do that have been missing in biology: for clinical trials, for cloud compute and data infrastructure, and for many other gaps that we're probably not even aware of today, where companies are reinventing the wheel. We'll see some really exciting companies come in to build that infrastructure once and give it to every company. But yeah, I'm super excited about that.

Speaker 2:

I've seen you talk about the shift from a lack of structured biological language into an era with a biological language. It seems like a critical, modular step, some version of going from zeros and ones to higher-level abstractions. Walk me through that: how you thought about it as a key metaphor, and what you're doing to help build it up.

Speaker 3:

My co-founder, Kenny, wrote this great line in our manifesto: "machine code of the biological programmer." It's very abstract and it sounds wishy-washy, but I really think it represents a vision that's inspired us all to work on Latch. I mean, I was visiting a relatively old lab the other day, built about 10 years ago, and I saw the typical cell counter, the one I was telling you about. Everyone's familiar with that: just count cells. Then recently I was visiting Ginkgo, which we all know: huge automation, huge high throughput. The part that stood out to me the most was one of their COVID testing facilities. At its peak it was doing tens of thousands of COVID tests per day. It was actually my first time seeing a 1,536-well plate. It's really beautiful. It's really tiny. There's no way a single biologist is actually filling it out by hand. It's the first time that you see that need for automation at the wet lab level.

Speaker 3:

What stood out to me during the Ginkgo visit is that I did not see a single biologist holding a pipette themselves. They were mostly theorizing about experiments. They would put the plates into machines that would do all the high-throughput experiments, then they would transport those plates from one machine to the next. Sometimes not even that; sometimes the transportation would also happen in an automated fashion. We're already seeing this not just within companies, but within cloud labs: Strateos and other companies that are trying to do this for everyone. They actually used to have an open Python API. They no longer have that, for many reasons; they're trying to bring it back. But just imagine writing a protocol in Python, sending it to Strateos, and it does the whole experiment.
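A Python-defined protocol might look something like this minimal sketch. The `Protocol` class, its method names, and the serialized request shape are all hypothetical, invented for illustration; this is not the real Strateos (or Latch) API, just the general pattern of describing wet-lab steps in code for a scheduler to execute:

```python
from dataclasses import dataclass, field

@dataclass
class Protocol:
    """Hypothetical protocol builder: each method records one wet-lab step
    that a cloud lab's scheduler could later run on real instruments."""
    name: str
    steps: list = field(default_factory=list)

    def transfer(self, src: str, dst: str, volume_ul: float):
        self.steps.append(("transfer", src, dst, volume_ul))
        return self  # returning self lets calls chain fluently

    def incubate(self, plate: str, temp_c: float, hours: float):
        self.steps.append(("incubate", plate, temp_c, hours))
        return self

    def read_absorbance(self, plate: str, wavelength_nm: int):
        self.steps.append(("read_absorbance", plate, wavelength_nm))
        return self

    def to_request(self) -> dict:
        # Serialize to a JSON-ready dict that would be POSTed to the lab's API.
        return {"name": self.name, "steps": [list(s) for s in self.steps]}

protocol = (
    Protocol("growth_assay")
    .transfer("reservoir/A1", "plate1/A1", 100.0)
    .incubate("plate1", 37.0, 12.0)
    .read_absorbance("plate1", 600)
)
print(protocol.to_request())
```

Because the protocol is plain data once serialized, it can be versioned, diffed, and re-run exactly, which is the property pipettes can never give you.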

Speaker 3:

I believe in a future where not only the bioinformaticians but everyone in biology will be defining and executing experiments through programming, through Python-defined protocols. This, combined with other innovations, will bring down the cost of biology by many orders of magnitude. It reminds me a lot of the impact that open source and cloud had on startups. In 1994, you had to ask investors for permission to start a company. I mean, you needed a few million dollars from a Series A just to get your servers and your engineers and your software business off the ground.

Speaker 3:

Ten years later, around 2004, Mark Zuckerberg started Facebook out of his Harvard dorm room with a couple thousand dollars that his friend lent him. I dream of a similar future for biology, where a college kid has an idea for a new therapeutic modality. They spend a thousand dollars to send the experiments out to a cloud lab, where they get executed. By the time that student wakes up the next day, an ML model has iterated on the results a few hundred times, and maybe it shows them that there's nothing there. But maybe, like Facebook, that kid just happened to strike luck, like Mark did, and they just found a new drug modality and are off to the races to put it into computational mouse models before raising some money so they can send it to a Vial to run the clinical trials for them and put it into the clinic. I think that's a really exciting future.

Speaker 2:

Yeah, I think Josh Kopelman had the line: my first company took a few million dollars; the second company, it was $10,000; and by the third company, I was able to get to an MVP for almost nothing. That was a massive cost reduction in just a couple of decades. It's pretty remarkable. Hopefully that happens for our field. Let's talk about some of the tech bio companies using you. What are some case studies? How has your infrastructure been deployed in real life? I hear about it all the time, but I'm curious to get some great stories and case studies.

Speaker 3:

There are a lot of examples, but one of my favorites is Elsie Biosciences, especially because they recently had a big, groundbreaking success. Their scientific journey with Latch has been pretty iconic, and it has been turning into a more repeatable model. Elsie focuses on leveraging DNA and RNA to combat diseases like ALS and Alzheimer's disease. What they identified is that many RNA therapeutics were failing due to slight variations in therapeutic sequences. They began looking for antisense oligonucleotides, or ASOs, to target these mRNA molecules and prevent the production of problematic proteins, offering solutions to the progression of various diseases. They recognized the challenge in the RNA therapeutics field, where slight atomic changes in these therapeutic sequences can impact efficacy, and employed ultra-high-throughput screening of oligonucleotides with the vision that if they screened enough of them, they could enhance potency, reduce toxicity and optimize delivery. They were obviously using high throughput to do this. They had bioinformatics to design oligo libraries and relied on NGS, next-generation sequencing, to test gene knockdown in whatever disease models they were using. This process generated a lot of data and led to a lot of delays: their bioinformaticians were bottlenecked, sometimes taking weeks to get results back, and the scientists didn't have instant access to the data they were generating. They came to us wanting to overcome this bottleneck in data processing by integrating LatchBio and enabling their scientists to easily access that data and run those bioinformatics pipelines themselves. They did, very successfully, and this facilitated the library design, accelerated their barcode analysis many times over, and made their machine learning models accessible to all their scientists.

Speaker 3:

This sounds pretty biased coming from me, but you can ask their CSO, Dylan, and he will rave about it himself. He has told us that the faster design and execution of experiments has taken the turnaround time for one of their computational experiments from two to four weeks down to one to two days, plus an 80% reduction in NGS analysis costs, from the $2,000 it used to cost between the contracting and the compute down to $200. Now their bioinformaticians can go focus on the more pressing and challenging analyses that they trained a whole PhD to do, instead of just running data for the scientists. All of this led to a huge acceleration of their core R&D milestones. They were able to screen more oligos faster. A few weeks ago, Elsie Bio announced a partnership with GSK, who will be harnessing their oligonucleotide discovery platform, part of which lives on Latch, to uncover new therapies with GSK's data. By using Latch they were able to streamline that discovery, and they continue to pave the way for RNA therapeutics. We're super excited about our partnership with them and continue to do a lot of work for them.

Speaker 2:

I'd love to talk about some of the challenges you've faced, some of the key lessons learned along the way of building a tech bio company. I'm curious: sometimes it comes down to execution risk, sometimes it comes down to strategy breakdowns. What are some of the key lessons you've learned?

Speaker 3:

There are so many, as I'm sure you know, but one that's really stood out recently, because of recent successes we've been having in that area, is learning to align our final goal, our final metric, our most important metric, with our users' success. A failure point we actually had around this in the past was that we were measuring revenue as the credits that we sold. For context, on Latch, the way that works is you buy certain credits, think of an arcade, or like Snowflake and AWS. You buy certain credits and then you go into the platform, and you can use those credits to run workflows and analyze data; storing data costs credits too, and so do some other features. We were focused on selling credits, and it was actually going relatively well. It was kind of sporadic and not too predictable, but it was growing. It got to the point where we were selling lots of credits, and we were on track to hit a large number we were looking for. But it was not translating to credits used. It was not translating to time spent on the platform. Most importantly, it was not translating to scientific insights for our customers on the platform. This was a huge problem. It felt nice to sell a lot of credits, but if people weren't using them, then it meant our product wasn't actually solving the problem. We were just charismatic and good at selling.

Speaker 3:

In what turned out to be a very painful decision at the time, we took this large goal we had and said: okay, from now on, revenue is still our final North Star, but we only count revenue after a customer has actually spent the credit, not when they initially bought it. This was really painful, because our usage was about a twentieth of the amount we were selling at the time. That meant our revenue number went down about 20x, and everyone was a bit disappointed, like, okay, damn, our numbers are not as good as we thought. But it doesn't matter that it's small. It is very small, but we're just going to focus on doubling it every six weeks. Our sprints are six weeks currently. We don't care about 18 weeks, we don't care about 12 weeks, we don't care about a year. Just, in the next six weeks:

Speaker 3:

This small number has to become double that small number. It has hugely aligned our incentives with our customers'. It has gotten people in the company deeper than ever into our customers' science. We now understand our top 10 customers' science so well that we're in the room helping them with their bioinformatics. We're brainstorming with them on what they can try next. We're building really nitty-gritty bioinformatics stuff for them, and we're building a product that genuinely solves their problem, because if they don't use it, we get no reward. Over the last 30 weeks, our credits spent have actually been doubling every six weeks, and with exponentials, that means that three weeks ago our number surpassed our old revenue goal, and it's continuing to 1.5x to 2x every six weeks in a really repeatable and healthy way. That's been a huge learning recently that we've been reiterating and retroing on, because it was a big one for us.
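The metric change described above can be sketched as a toy ledger. Everything here is illustrative: the class, the prices, and the customer name are invented, not Latch's actual accounting. The point is simply that revenue is recognized when a credit is spent, not when it is sold:

```python
class CreditLedger:
    """Toy model: count revenue on consumption of credits, not on sale."""

    def __init__(self, price_per_credit: float):
        self.price = price_per_credit
        self.balances = {}     # customer -> unspent credits
        self.recognized = 0.0  # revenue from credits actually consumed

    def buy(self, customer: str, credits: int):
        # Booked sales alone say nothing about whether the product is used.
        self.balances[customer] = self.balances.get(customer, 0) + credits

    def spend(self, customer: str, credits: int) -> int:
        available = self.balances.get(customer, 0)
        used = min(credits, available)  # can't spend more than was bought
        self.balances[customer] = available - used
        self.recognized += used * self.price
        return used

ledger = CreditLedger(price_per_credit=1.0)
ledger.buy("acme-bio", 1000)   # $1,000 sold...
ledger.spend("acme-bio", 50)   # ...but only $50 counts toward the North Star
print(ledger.recognized)
```

Under the old metric the example above would read as $1,000; under the consumption metric it reads as $50, which is exactly the 20x-style gap described in the episode.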

Speaker 2:

I love that. That's awesome. Let's talk about five years out. Every day on Twitter it feels like there's a new AI diffusion model for biology here, some new breakthrough there. The space is really cooking in lab automation. You talked about going from design into implementation mode. Where are we going five years out in tech bio, and what gets you most excited?

Speaker 3:

I think there are two branches here. In terms of Latch, five years out, it's very clear to me: we're going to replace AWS, Azure and GCP for biology. When a biologist or a bioinformatician thinks of the cloud five years from now, they will think of Latch, not of AWS, and that, in our minds, will save biotechs 98% of the time, which they can instead focus on their differentiating science. So that's the vision for Latch itself and where I see us going. But I want to talk a bit more about the larger vision for the tech bio space, because I think that's really interesting. And I do want to preface by saying that I think AI is currently overhyped and it might crash soon. It might not, but it might, because there's a lot of money going towards it. But I'm thinking in longer timelines here, five-year timelines, many cycles down the road.

Speaker 3:

I believe the future of biology will be high-throughput, irrational drug design. For context: previously, around the 1950s to 1980s, we had Pfizer, Merck and others identifying targets and then screening them against thousands or maybe tens of thousands of natural candidates. I think Merck would famously pay for part of their employees' trips if they brought back a dirt sample from wherever they went. They would give them these special vials and ask them to bring back samples, because then they could screen the new stuff found in the dirt against these targets. Then around the 1990s came Regeneron, came Vertex, and they came with their crazy idea at the time to make drug design rational: hey, let's use techniques such as crystallography, such as genetics, such as NMR, and let's design the molecule around the target instead of doing this high-throughput screening. And I mean, that has worked massively. Vertex is now a top-20 pharma, Regeneron too, and many other companies. That's been going on for the past 30 years, with just massive success.

Speaker 3:

Now I believe there's a new shift happening, with all these new trends pointing towards millions of data points generated through highly multiplexed biological experiments and interpreted through trillion-parameter general function approximators.

Speaker 3:

I believe that this is what the future of discovering new therapeutics and new biological modalities will be. I used to think the holy grail of biology would be to simulate an organism, like a mouse or a human, down to the atomic accuracy of each cell, and then test compounds against that virtually, through a perfect simulation. But now I don't think we will ever get there. I think the equivalent of that will be to teach an AI to create a compression of our high-throughput data through a model that is not human-interpretable, but that we can ask questions of, such as testing thousands or millions of compounds against a specific target and then answering which one we should take to the clinic. And it will, almost every time, work automatically. That's where I see the future of biology going. I think this is far off, and today we have a lot of data problems to solve first before we can make that happen, but it's a pretty exciting future.

Speaker 2:

Couldn't agree with you more. Well, with that, Alfredo, huge fan of what you guys are up to. Thanks again for taking the time.

Speaker 3:

Likewise. Thank you so much for the invite.

Speaker 1:

Thanks for listening.

Reshaping Biotech's Data Infrastructure
The Future of Infrastructure for Biology
Tech Biocompany Challenges and Future Vision
Appreciation and Gratitude