
Jasper AI's Dave Rogenmoser & Saad Ansari on Growing & Maintaining an LLM-Based Company

In this week's episode, Lukas interviews Dave Rogenmoser (CEO & Co-Founder) and Saad Ansari (Director of AI) of Jasper AI.
Created on February 16 | Last edited on February 17


About this Episode

In this episode of Gradient Dissent, Lukas interviews Dave Rogenmoser (CEO & Co-Founder) and Saad Ansari (Director of AI) of Jasper AI, a generative AI company with a focus on text generation for content like blog posts, marketing copy, and more.
Lukas talks with Dave and Saad about how Jasper AI was able to sell the capabilities of large language models as a product so successfully, and how they are able to continually improve their product and take advantage of steps forward in the AI industry at large.
They also talk about how they keep their business ahead of the competition, where they put their focus in terms of R&D, and how they are able to keep the insights they've learned over the years relevant at all times as their company grows in employee count and company value.
Other topics include the potential use of generative AI in domains it hasn't necessarily seen yet, as well as the role that community and user feedback play in the constant tweaking and tuning that machine learning models go through.

Connect with Dave & Saad:

Find Dave on Twitter and LinkedIn.
Find Saad on LinkedIn.

Transcript

Intro

Dave:
People just don’t have an understanding or a grasp of what is happening. Fundamentally, what are these models trying to do, and how do they respond to certain things? There’s just not anything anyone’s ever had experience with before. Coming in, we’re not just teaching them how to use our product, we’re trying to teach them, “Fundamentally, here’s even what AI-generated content means.”
Lukas:
You're listening to Gradient Dissent, a show about machine learning in the real world. I'm your host, Lukas Biewald. This is an interview with Dave Rogenmoser, the CEO of Jasper AI, and Saad Ansari, the head of AI at Jasper AI. Jasper is one of the most exciting breakout successes in text generation right now, and a pioneer in using prompt engineering to build a successful business. This is a really interesting interview, both about entrepreneurship and applied machine learning, and also about the technical details of large language models and the future of how prompt engineering will work. I learned a lot from this interview and I hope you enjoy it.

LLMs as a Commercial Product

Lukas:
Well, why don't we start a little differently. I was thinking about what I would need as an ML researcher, which is mostly our audience. Could you explain how a marketer would use Jasper and what they would get out of it? And maybe even get concrete about what people love so much about it? Because you mentioned that people have a real palpable excitement about using the product. Why is that happening?
Dave:
Yeah, marketers have a lot of content to create, and most of them would create infinite amounts of it. Nobody ever has enough blog posts, nobody has enough...I mean, at some level you probably have enough ad creative. But you run a bunch of tests: everybody sits down and they write all this stuff, they test it, and a week later they're out of things to test, and they go six months without ever testing another headline again for their Facebook ad. The only way to do this has been just through manpower, just trying to hire more people and dedicate more time to it. With marketing, it's such a thing that a little bit better headline can be the difference between successfully and profitably spending $100 million on ads, or spending $3 million and having to shut the whole thing down. A lot of this is pretty thin margins between a whole campaign working really well and it never getting off the ground at all. Yeah, there's just so much at stake and so much value to add there. Marketers think highly of themselves, like, "My writing's different," and all of those things. And it is, for a lot of them, but I think when they saw Jasper, the fact that it was pretty good — in some cases better than them — and it could do it in an instant...it freed them up to go from the marketer that has to stare at the blank page and do it themselves, to now a little bit more of the managing editor. Like, "Jasper will give me all the raw materials here, and he'll give me the first draft, and I can pick and choose, and assemble, and all that stuff." It kind of moves everyone up a level there. So, that's what marketers use Jasper for. Some of it's just high-volume stuff, they just need to create a lot of it. Some of it is creating great content, better than what you would create on your own. All of it is marketers trying to just get an edge: a little better content, more content, faster.
Lukas:
You were one of the first companies to really make commercial use of large language models. Could you talk about what that process looks like? How do you make the large language model into something that people are actually willing to pay real money for?
Dave:
I think the first thing is you've got to know a customer base, and know it really deeply. That's, I think, always been our secret. I just know the customer base super deeply. I am the customer, I've sold a lot of stuff to them before, they're my friends, they're our community. I'm so in it and I'm always looking for ways to make their lives easier, make my own life easier. I think a lot of people just aren't connected to any sort of end user in any meaningful way. I remember, I joined the OpenAI Slack community back in the day, and the day I got the credentials, I was like, "I am Thanos here." I'm like, "This is ultimate power." And I get in there, and I'm the only one talking about building something that people would want. Literally the only one. There's 1,000 people in there, and they're all translating the Declaration of Independence into Elvish, and then back out into album art. I'm just like, "This is cool, but literally no one's talking about letting regular people use this stuff?" It was confusing to me, still perhaps confusing to me. I think there's just a huge market for taking this stuff and making it useful and solving some specific problem in some way. But I think that's where it started for us. Just like, "Hey, we made this tool." I knew deeply what we wanted, and I felt like I knew the outputs that would get customers excited and that would get me excited. Early days, it was just playing around. I didn't know anything about prompting. I'd love to see my first prompts, they were probably just nothing. I just dove in for a week, and just started really crafting these things one-by-one, and messing around with it all. I'm sure Saad has replaced everything I've done at this point now. The early days, it was just a lot of testing out. I think everyone was learning what to do there. But really, I just kept coming back to the customer. I care about all the settings and all the prompting — all that is just a byproduct of, "We've got this problem that needs solving. And if this tool can solve it, then let's try it."
Lukas:
Is the problem somehow more specific than just, "Write me some content on this topic"? How do you think about that? Again, I'm not a marketer, so walk me through what they're thinking. Or even, could you point me to a specific marketer that works with you, and tell me how they think about the content that they get?
Dave:
Let's use blog posts, probably our most popular use case. You search for something on Google, there's only going to be 10 results that pop up on the first page. So, your blog post that you write has to be better than quite literally millions of blog posts that could surface and be somewhat there. You got to be in the top 10, but then even past that, you really got to be the top 2. There is a pretty big fall off even as you scroll past the top 2 or 3 there. It's not just about getting a blog post out — which is maybe what earlier models could do — it's about getting a really good blog post out. And it's really got to win in this marketplace of ideas. It's got to be compelling, it's got to be engaging, it's got to be factual, it's got to be helpful, it's got to be written by somebody that deeply knows the audience. Just clicking "compose" on some large language model, it's not going to be enough. It's not going to be enough to win, particularly if other people have that. Our customers, they want really high-quality content. I always challenge them, I'm like, "Don't even bother writing low-quality content. This isn't some article spinner." One, I just don't feel good about building that kind of product. But two, it's just not going to work for marketers. It's just not going to get clicks, it's not going to rank on Google, it's not going to get people excited when they come to your landing page, if it's just filler stuff. So, from the beginning we've always said, "Hey, we're here for high-quality content." To the degree that we can help people produce that, we will. That's going to be a big part of our focus, as opposed to just an article spinner that just spins out tons and tons of stuff. It's just not going to stand up and actually produce the ROI that marketers want.
Lukas:
My experience of just working with GPT-3 is, it's an impressive product for sure, but I don't think I get what I would consider high-quality blog posts out of it when I just mess around with it. Can you talk about how you actually got it to deliver high-quality content? Is there a human in the loop here that's tweaking it, or what's the process like?
Dave:
Yeah. Well, you certainly — even in Jasper — can't just go and click a button and get a high-quality blog post out now. We really talk a lot about it with our customers, like, "Hey, it's a dance. Jasper's there to help give you..." I might start with blog post titles or blog post topics. "Hey Jasper, give me 10 blog post ideas around this topic. Okay, that's a pretty good one." And that helps me start off in a better spot. "Jasper, give me 10 titles about that topic. Okay, cool, that's pretty cool. Jasper, give me an outline that I can start to work off of here. That first one stinks. Give me three more outlines." You're basically going back and forth with Jasper to help assemble it. If you don't know what a good blog post is, you're going to be in trouble. If you don't really know what your reader wants, you're going to be in trouble. Because Jasper's not going to know all that. But what I think Jasper does do a great job of is, if you're able to help piece that together, you can assemble a really great blog post. Some of it'll be used, some of it'll be steering the output there, but it's just using a variety of different tools to do that. You can get some really, really, really high-quality content that's really remarkable and that readers really want, but it is going to take a human in the loop doing that. So, that's happening there. And then our team in the background is testing all the different models to see which produce blog posts best. There are different prompts that produce different types of blog posts. Should we have one tool that creates a general blog post? Or are there actually five types of blog posts and we need five different models, each one a little bit more specific to a listicle, or informational blog post, or whatever? We're trying to do all of that behind the scenes and simplify that for the user, and just turn it into this magical experience where they can just show up and start getting to work. Our goal is that the software will become invisible.
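A minimal sketch of the back-and-forth "dance" Dave describes: each step asks the model for candidates, a person picks one, and the pick seeds the next step. The `llm` function below is a placeholder standing in for whatever completion call backs the product; everything here is illustrative, not Jasper's actual pipeline.

```python
# Human-in-the-loop drafting: generate candidates, pick, repeat.
# `llm` is a stand-in, not a real API; swap in your completion call.
def llm(prompt: str, n: int) -> list[str]:
    return [f"candidate {i} for: {prompt}" for i in range(1, n + 1)]

def pick(candidates: list[str]) -> str:
    # Stand-in for the human choosing (or rejecting and regenerating).
    for i, c in enumerate(candidates, 1):
        print(f"{i}. {c}")
    return candidates[int(input("Pick one: ")) - 1]

topic = "email deliverability"
idea = pick(llm(f"Give me 10 blog post ideas about {topic}.", 10))
title = pick(llm(f"Give me 10 titles for: {idea}", 10))
outline = pick(llm(f"Give me 3 outlines for: {title}", 3))
draft = llm(f"Write a first draft following this outline: {outline}", 1)[0]
```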

The Impact of Prompt Engineering

Lukas:
You were the first person doing the prompt engineering, is that right, on day one of the product?
Dave:
Yeah, it was me. It was me.
Lukas:
What did you learn? Teach me how to do prompt engineering. What are the first couple things that you figured out when you were just messing around with it?
Dave:
Oh man. I mean, I just didn't know anything. First, I tried treating it like these instruct models where it's like, "Write a blog post for me." And it's just like, "Write a blog post for me, write a blog post for me, write a blog post for me, write a blog post for me." Okay, cool. What's happening here? There's patterns, and it's trying to figure out what I want, and all of that. I think really early days — and we still get a lot of benefit from this — is the examples that we would give it, for few-shot outputs, really were important. I felt like a lot of our competitors...we're marketers, so they're probably just sticking — again, I don't really know what any of them are doing — whatever decent examples in there. And I had a really high bar. I was able to use examples that I knew for a fact converted really well on Facebook, I knew for a fact performed really well there. We just always used stuff that was proven in the market to start to steer and give examples there. So we'd get really, I'd say, highly opinionated, but really high-quality outputs out of it there. But yeah, it was just me reading every doc I could. There weren't that many docs back then. And just talking to everybody and like, "What's top-p mean?", and all of these things. I just had no idea. But I knew the output I was trying to get to, and I wouldn't stop until it was like, "Man, that is really good and I would really use that." I think that's probably what's harder, I suppose, than figuring out what top-p really does.
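A rough sketch of the few-shot approach Dave is describing: seed the prompt with examples already proven to convert, and let the model continue the pattern. The products and ad copy below are invented for illustration.

```python
# Few-shot prompt assembly: proven, high-converting examples steer the
# model toward the same style and quality. All examples are invented.
PROVEN_ADS = [
    ("CRM for cleaning companies",
     "Stop chasing invoices. Book, bill, and get paid in one place."),
    ("Meal-kit delivery box",
     "Dinner, solved. Chef-level meals in 20 minutes, zero planning."),
]

def build_few_shot_prompt(product: str) -> str:
    shots = "\n\n".join(
        f"Product: {name}\nFacebook ad headline: {ad}"
        for name, ad in PROVEN_ADS
    )
    # The model is asked to continue the pattern the examples establish.
    return f"{shots}\n\nProduct: {product}\nFacebook ad headline:"

print(build_few_shot_prompt("AI writing assistant"))
```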
Lukas:
What does top-p do, what is that?
Dave:
Oh gosh, I was just hoping you weren't going to ask me. We got people here now, we got Saad that can do all that stuff. But if you're listening, let my ignorance on top-p go to show you where I think the real value lies here. It's outside...or, in addition to knowing what all the little things do, it's like, "What is it useful for? And where does this really play a role in society?"
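For readers keeping score: top-p is nucleus sampling. Instead of sampling from the full vocabulary, the model samples only from the smallest set of tokens whose cumulative probability exceeds p, trimming the long tail of unlikely words. A minimal sketch, assuming the next-token probabilities are already in hand:

```python
# Top-p (nucleus) sampling over a toy probability distribution.
import numpy as np

def top_p_sample(probs: np.ndarray, p: float = 0.9) -> int:
    order = np.argsort(probs)[::-1]               # tokens, most likely first
    cumulative = np.cumsum(probs[order])
    cutoff = int(np.searchsorted(cumulative, p)) + 1  # smallest set over p
    nucleus = order[:cutoff]
    nucleus_probs = probs[nucleus] / probs[nucleus].sum()  # renormalize
    return int(np.random.choice(nucleus, p=nucleus_probs))

# With p=0.9, only the first three tokens here can ever be sampled.
vocab_probs = np.array([0.5, 0.3, 0.15, 0.04, 0.01])
token = top_p_sample(vocab_probs, p=0.9)
```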
Lukas:
Well, when you think about hiring a prompt engineer, what do you look for? Is it domain expertise in marketing and content? What else would you ask me if I showed up interviewing for being a prompt engineer?
Dave:
For us, I look first and foremost for an understanding of the customer. Or at least a willingness and desire to understand the customer, and an empathy for the customer. I really do want...we have people that apply that really want to do AI. "I really want to do AI. I want to be at a cutting-edge company. Generative AI is so cool," and all that. That's fine, but if it's just that, I think you're going to struggle here. I want it to be, "I really want to use AI. I love the customer base. I can see the problem, I can articulate the problem that we're working on here. And man, AI happens to be a really great fit there, and I'm excited to find what else really helps solve that problem." That's really what I'm looking for, as opposed to this, "I just want to do AI." When I think about a prompt engineer, we got away for a long time without anyone that really knew AI, I would say. We were technical, and we could hack it, and we were fine-tuning models, and we were still doing some fancy stuff, but it wasn't like we were doing anything that would blow anyone's minds. But it still just worked. Going back to that, I think generating prompts and working on that — starting with a deep understanding of the customer — will get you so, so far.
Lukas:
I think you're the poster child for prompt engineering. And certainly some people think that machine learning goes away — or takes a backseat in the sense of training models — and there's this new role of prompt engineer that uses these models for some purpose. My background's in machine learning, but I'm open-minded to different paths that the industry could take. Do you think that in your world, machine learning technical ability even matters at all? Do you actually try to hire machine learning people?
Dave:
I think it does matter. I think a lot of this stuff — over a long enough time horizon — gets commoditized, and it matters...I don't know if "less". The things that used to matter technically are probably much more solved now. We've got a really strong internal AI team that's full of really smart, what I would call "AI people". Saad's putting together an awesome team there. We want to have a ton of those people that can just help us, can develop moats and develop IP, and again, solve customers' problems in a deep way. I think it does matter, I think we'll always have that. I want to have a 500-person team full of ML engineers and AI people doing all of that. But I don't want to be driven by that. That's always a byproduct of the thing that we're trying to do. If we can solve customers' problems using all that stuff, then we're so much better off for it and it definitely gives us an advantage. Saad, do you have any thoughts there you want to add in?
Saad:
No, I totally agree. Essentially what we're asking is: the customer wants something, and the customer has a vision or view, or maybe they're trying to discover a new idea. There's this ideal output out there somewhere that would make them happy, and delight them, and give them a lot of value. Between their input and the ideal output for them, there's choosing the AI system. It's not necessarily one model, it could be a number of models. I think that's where the AI team plays a role. Like, "What is the right system? What is the right base?" I think you're right as well, prompt engineering's going to play a huge role. As Dave said...Dave's a perfect prompt engineer, somebody who loves the customer and is willing to iterate through n number of cycles to find the right prompt. Just an interesting point there as well, there's this idea of expertise. An expert is somebody who's learned something and knows it from experience. I think one of the really interesting and fun things about a lot of these models is that even the people who made them aren't experts on them. It often happens that our R&D center will come out and give us a model, and we have to test it, and they'll tell us all these things about the model. Within a few minutes of us testing it, we've already falsified a lot of those assumptions. Nobody's an expert in prompt engineering. It just takes a love of the end use, and customer, and the product, and then just being willing to be patient with it.

Quantifying Model Improvements & GPT-3

Lukas:
I guess one of the things that seems like it might be hard — putting myself in your shoes — is actually quantifying if your models are improving over time. Is there some way that you know even, Dave, that as you iterate, that they're better, besides just eyeballing the content that's getting produced?
Dave:
Yeah, early days it was eyeballing. Sometimes we'd — in Slack — just try to pop in two screenshots, "Hey everybody, vote on one of these. Which one do you think is better?" But again, all this would happen in my head a lot, where I would just keep cycling until I could feel it getting better. Just my own expertise there, it was nothing scientific. And even a lot of stuff we didn't test. Once I found the right setting on something, I probably wouldn't even test around and find the optimal one, it was just locally pretty good. Then I would release it to customers, and I think anecdotally they'd share feedback on whether stuff was getting better or worse. We would track things, "Are they favoriting it? Are they copying it to their clipboard?" That was a signal that was even maybe stronger than favoriting it. And then, are they using it in other places in the product? We started to track those as real signals there. It is funny, especially early days — this probably happens now — but a lot of people's perception can really sway a whole community. We'd have somebody complain about a template, and they'd say, "Something changed in the last five hours, it's totally different, it's way worse. Please revert it. Dave, this is getting worse." You had a bunch, "Yeah, set the change unchanged [?]." To the best of my knowledge, nothing had changed, nobody was touching that, nothing shipped, nothing happened. But people just pile on that as a way to highlight their frustrations with it. And the same thing reversed, people would be like, "Hey, anybody notice that the paragraph generator's way better now?" Again, nothing had happened since before...all those people piling on like, "It's so much better, I love it. Thank you for all you guys do." I think companies can get a lot of mileage out of frequent improvements and having a culture of improving, because everyone just always assumes everything is improving all over the place. You get the benefit of the doubt over things you haven't even touched. Like, "Man, everything's just getting so much better all the time," because they've come to see that that's a general theme in our company. If we slow down and stop, again, I think it atrophies. The customer trust corrodes, and they start thinking more stuff is worse than it really is there. It is hard to quantify, but a lot of it is just customers sometimes just feeling like things are getting better, and they're being heard, and they're seeing improvements there. They'll give you a ton of benefit of the doubt from that.
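On the quantification question, a toy illustration of how a behavioral signal like copy-to-clipboard rate could be compared across two variants. The counts and the two-proportion z-test framing are ours, not Jasper's actual methodology.

```python
# Compare copy-to-clipboard rates between two model variants.
# All counts are invented for illustration.
from math import sqrt

def copy_rate(copies: int, generations: int) -> float:
    return copies / generations

def two_proportion_z(c1, n1, c2, n2) -> float:
    """z-score for the difference between two copy rates."""
    p1, p2 = c1 / n1, c2 / n2
    p = (c1 + c2) / (n1 + n2)                   # pooled rate
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # standard error
    return (p1 - p2) / se

# Variant A: 420 copies of 3,000 generations; variant B: 510 of 3,000.
z = two_proportion_z(420, 3000, 510, 3000)
print(f"A: {copy_rate(420, 3000):.1%}, B: {copy_rate(510, 3000):.1%}, z = {z:.2f}")
```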
Lukas:
Do you feel like you benefit from improvements in GPT-3? I've heard different things from other people. Some people seem to feel like GPT-3 will launch a new thing and it'll break all the prompts. Other people tell me that it's actually much better than it used to be. What's your experience?
Dave:
Yeah, I think we get benefits. Even I tell our team, "Let's do a lot of our own stuff and have our own IP, but if OpenAI is going to just do all this free work, and then just push it to some API endpoint, and then now you've got all this new functionality that takes us 20 minutes to test it and implement it, let's use that." Let's always be sure that we're testing all this new stuff — that they've got 200 people building for us — and not just rely on our own stuff. It's hit-or-miss. Not everything that they roll out is better. We A/B test pretty much every new model and update that they make. We'll fine-tune our own models, and it's definitely hit-or-miss whether those are even better. You got some of the best people in the world working on this, and they'll be super excited about some model, we'll test it and be like, "This is actually performing worse across the board. We're not going to roll this thing out there." It is interesting how that all works, but we definitely try to use all the stuff that they release.
Lukas:
Do you find fine-tuning useful? Again, some people say that prompt engineering makes fine-tuning obsolete. What's your opinion right now? It's November 7th, 2022.
Dave:
Yeah, this will be obsolete November 8. Generally, we find it helpful. I think a lot of what I worry about in the space or try to allocate...it's allocating resources correctly here. Where you're not doing all of this work that then just gets obsolete out of the blue. It's like, "Oh, we just spent all this time fine-tuning all these models. And oh, this new model makes fine-tuning obsolete." That is so much of what we're trying to do, just figure out like, "Where's our space, where's our special secret sauce that we can go and implement there?" We've got different fine-tuned models running, but even as I say that, I'm thinking like, "Man, I don't know when the last time we tested just a new model that might very well be better than an old fine-tuned model." This stuff changes so fast. I think a lot of what we think through is just being sure we're always going back through the whole system, and updating with new stuff, and testing it with new stuff.

Alternative LLMs, the Limitations of LLMs, & Image Gen Models

Lukas:
Another question that's probably not going to age well — but I'm curious your current take, as far as you can tell me — is, how do you think about the different LLMs out there? You're famously using GPT-3, but I'm sure you've tried BLOOM and maybe other ones. What do you think about all the LLMs out there?
Dave:
We use GPT-3 primarily but not exclusively. We've had other stuff going on and we're always testing new stuff there. It's funny. I think I get down...this feels almost like an Android/Apple conversation, where you get down into the weeds, and people that really are in the know are like, "Oh yeah, GPT-3, not even top 5 anymore." I've heard people say stuff like that, "No, this one's better, this one's better, this one's better." I just don't see that bubbling up to being a more user-friendly model or really doing things. I still feel like GPT-3, from my perspective, is generally far better than a lot that's out there. I don't doubt that. We've seen this ourselves too, there's things that can do specific things better. But I think by and large, GPT-3 still reigns supreme in my mind, and most people producing high-quality content are using that primarily. That's what I'd say. I don't know, Saad, what do you think? Or even Lukas, what are your thoughts? Have you seen that...I'm always asking people this question, "What else is out there? What are you seeing?" I think a lot of people that just [?] kinda really know, they're like, "GPT-3's pretty dang good."
Saad:
Yeah, this is almost like a puzzle of three black boxes. You have the black box of, "What does the customer want?" The black box of, "What do the models do?" And then the black box of everything in the middle. And like, what are we going to do about all that stuff? I think customers want different things for different use cases. For blog posts, maybe they want to have something that's more semantically complex, where the language is richer. If they're doing product descriptions, maybe they want to make sure the facts are preserved, that it's more domain-specific, and that it's able to speak to and sustain the data and the specifications that were in their initial product descriptions. What we're finding is different models have trade-offs. Typically, when you increase in semantic complexity, the same processes which get you that also get you to break down facts. It gets better at representing what looks like facts, but it might actually be lying way better. And so you can't even tell it's representing facts in a false way. Some of these models are a little bit more literal, but they're not able to be very semantically complex or speak like Shakespeare. GPT-3 is great, but it wasn't trained on a lot of foreign languages, whereas BLOOM was. I think ultimately, it's about, "What does the customer want for that specific use case?", and then, "What is the best way to get there? What are the approaches and processes?" It's not necessarily one model all the time. Maybe it can be a combination of models, like an adapter model with a base model. And then how do you tie together those initial models? It's almost like a menu of options to get the best output for the highest efficiency. I think it's more like puzzle pieces; you'll always have these three variables you're dealing with.
Lukas:
Well, sure, but at this moment is there another model that you generally use? Or is there a number two that you would point to, other than GPT-3?
Saad:
T5's really interesting for some of its instruct capabilities and its ability to be fine-tuned for very specific things. And indeed, you could say there's a hypothesis that a really good AI architecture will always have two things: a really generalized model that's powerful in semantics, which is one half of the thing about language, and then a secondary model that's either really good at specific instruct, or at some sort of fact adaptation. Because one model will become slightly more complex at the cost of sustaining facts, whereas another model can preserve facts at the cost of not being the most semantically complex one. This is a hypothesis. We think we'll probably end up finding a lot of these different pairs to get the best of both worlds.
Lukas:
Interesting. Maybe overall — at this moment, November 7th, 2022 — do you have a characterization in your mind of what large language models can do and can't do? Where do you feel the limit is? Is there a type of content that you feel like you couldn't create well — given the current state of things — that you might be able to do in 2023 or 2024?
Dave:
I can bubble up a few big customer complaints. Perhaps the biggest one is factuality. It'd be one thing if they just always had incorrect facts, and then you see a fact and you go correct it, but it lulls you to sleep. Because it's like "Man, it knows a lot." All of a sudden you start trusting everything, and then you don't want to look up every fact, because we've seen four that are totally right, but then it'll just say the opposite. I remember one time I was asking who won the 2021 Super Bowl. I forget what it was. Basically it was the right teams, the right score, the right location, the right date — I looked all that up — but it actually switched the teams. The team that lost, it said won. It just lulls you to sleep, because it all looks pretty good and probably passed the sniff test. And then you realize, "Oh man, I just shipped the exact wrong answer." That's just a big thing that we've got to figure out how to control for, how to identify. How to be just a bit more truthful, I suppose. Another one is just, obviously, getting it to follow instructions. It tends to repeat itself. If it hasn't quite picked up the pattern or the instruction, it just thinks you want it to say the same thing over and over again. If our customers set that pattern — let's say you do it twice — and then you keep trying to write...now it's going to do it even more, you just reinforced this. It spirals outwards, where it's like, "Well, why is it always saying the same thing over and over?" It's because you set the pattern at the beginning that makes it do that. Or you misspell one word in an intro paragraph of a blog post, and then you realize that Jasper misspells it the entire way. You're just like, "What is happening here? It's a common word." It's like, "Well, you kind of misspelled it, and so Jasper thinks that's how you want the word to be spelled." There's definitely a lot of just steering content and teaching people. I also think people just don't have an understanding or a grasp of what is happening. Fundamentally, what are these models trying to do, and how do they respond to certain things? There's just not anything anyone's ever had experience with before. Coming in, we're not just teaching them how to use our product, we're trying to teach them, "Fundamentally, here's even what AI-generated content means. Here's the limitations of these kinds of models, and here's what they're really good at." We're trying to teach all of them that in a very simple way. Saad, have you seen anything that you feel like we can't do?
Saad:
Yeah. Just the way we evaluate and think about models is, you have an X/Y axis: you have semantic complexity, and then you have domain fit, which has a lot of different features like factuality. Then you have a bunch of additional capabilities like multilingualism and so on. We also pay a lot of attention to instruct: can the customers get what they want out of it? In terms of semantic complexity, I think it can probably end up doing everything, at the end of the day. There was this article about "Does Moore's law apply to generative AI?" I think it does, but not really. It applies for semantic complexity really well, but unlike Moore's law, which continues on forever, there's diminishing returns. Eventually you'll start hitting an asymptote and level off, because what does it mean to become infinitely good at semantics or language? Humans have a limit, you can't really beat that. What would it even mean, for a customer to go above that? I think it'll actually get really good for semantics. I think there's a lot of limit around domain fit and factuality. There's actually an aspect of it that worries me a little bit. I would really be worried if anybody using a generative model started using it to get advice, legal advice or medical advice. It speaks to what Dave was saying about factuality. It's actually getting better at lying. It looks like it's more factual. Like if you ask it for legal advice, some of these models can cite legal papers and even come up with fake court cases, but it's totally made up. It's actually not just a limitation, but almost a risk. I think our community is really wise to not use it for that, but you'll see this get better and worse at the same time; it'll get better at representing citations in a really strange way. I think for domain fit and factuality, we actually have the perfect tool for factuality. We've always had it for decades: it's copy and paste. The question is, if we want to increase factuality, are we able to bring in a database that has the facts that the user cares about? And then have those stupid models... I call it stupid factual...you have stupid factuality, smart factuality, and false factuality. Can we replicate stupid factuality with one model, and then have another model be semantically complex, and bring the best of both to the user? I think the limitations are around factuality, as Dave said. I think everything else, though...in a way, it's not that the sky's the limit, but the users are the limit. We'll be able to accomplish what a lot of humans can accomplish in [?] language and also semantics.
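A minimal sketch of the "stupid factuality" idea Saad gestures at: keep verified facts in a plain store, paste the relevant ones into the prompt verbatim, and let the semantically strong model handle only the wording. The fact store and prompt wording here are invented for illustration.

```python
# Ground a generation request in a store of verified facts.
# The store contents and lookup scheme are invented examples.
FACT_STORE = {
    "acme widget": [
        "Weight: 1.2 kg",
        "Battery life: 10 hours",
        "Warranty: 2 years",
    ],
}

def grounded_prompt(topic: str, instruction: str) -> str:
    facts = "\n".join(FACT_STORE.get(topic.lower(), []))
    # Facts are pasted in verbatim; the model is asked to preserve them.
    return (
        f"Use ONLY these facts and do not invent new ones:\n{facts}\n\n"
        f"{instruction}"
    )

print(grounded_prompt("Acme Widget", "Write a one-paragraph product description."))
```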
Lukas:
When you say "semantic complexity", can you give me an example of what you mean? What would be a very semantically complex thing to say?
Saad:
Yeah, let's do two examples. One being a Tweet and then one being the longest form possible. A Tweet: let's just say you say a sentence like, "The dog barks." And you say, "The dog likes Jurassic Park." And you say, "The dog likes Jurassic Bark." The last one is a pun. For the model to know, "Hey, write me a funny joke about a dog," it would have to know that "bark" and "dog" are related, it would have to know that "Jurassic Park" was a movie and that you can replace "park" with "bark". There's a lot of semantic complexity going into that sentence. You're getting higher density of meaning within a shorter token or word count. Whereas the first sentence is almost the same word length, but it's less complex. Semantic complexity is the ability for it to have different layers of meaning within a given space. In terms of the longest form, think about a play or something by Lin-Manuel Miranda. You have questions of plot, where you're getting the end of the play to refer to something in the beginning of the play, or different paragraphs referring to each other. If you imagine the words being linked to each other, it's like you have more links between words and between paragraphs. That's semantic complexity; it has more dimensions to it. Insofar as these LLMs, large language models, are predicting the next word in a string of tokens, you can see why it's hard for them to accomplish this. But at the same time, why they mathematically can end up doing so.
Lukas:
It's interesting, I feel like I've spent a little more time with DALL·E maybe because my daughter loves DALL·E. I feel like there, we have such basic problems. We try to get it to draw the mom with black hair instead of blonde hair, and it drives my daughter nuts, and actually my wife nuts too. That just seems like such basic semantic understanding of a set. It'll often take a different person in the scene that we're trying to describe, and give them black hair instead of the mom. I'm curious, do you think there's something different about image generation? Because it doesn't seem like it has very much understanding of what I'm asking, at least in that domain.
Dave:
I think image generation is interesting. Obviously it's so visual and instant that it's really easy to synthesize the whole thing in half a second. Whereas I think if you had Jasper write a blog post, it's like, "Is this a good blog post? Is it what I wanted?" It's going to take me two minutes to figure that out, do all of that. There's something...I'm sure there's a lot of weird stuff happening. And obviously, text generation's been around longer than image generation. This image generation will probably be super easy and awesome in a year, or 72 hours. I'm sure there's weird stuff happening that's just harder to see. It's harder to see that, "Oh, it gave the wrong hair color to the wrong person," or, "It gave the wrong conclusion to this thing that I thought it did there." It also seems like with image generation, you could say, "Don't paint this car pink." What's it going to do? It's going to paint the car pink, because it doesn't know that "don't" and "pink" are tied there. I think image generation prompting still feels much dumber than text generation. I assume it's just the state of the technology being earlier, as opposed to maybe something being more complex there, but I could be wrong.

Success & Staying Ahead, Staying on Top of Gen AI

Lukas:
Interesting. I'm curious — and I'm not here to grill you at all on your business model — but I feel like I have to ask. You made this awesome business, it sounds like, in a few weeks of effort at first, and it just took off. How do you think about defending your business? Don't you worry that someone might come along with a similar approach? Or maybe they find something that's a little bit better, somehow, in such a fast-moving space? How do you stay ahead of that?
Dave:
Well, I don't think we made an awesome business in a few weeks. I think we made a crappy MVP for how to do Facebook ads in a few weeks. And then I've spent every day since then building out all the other parts of a scalable, repeatable business. But no, it's a super valid question. I think I've spent a lot of time just thinking about moats over the last 18 months. What's real, what's not real. I'm looking at the B2B companies, like, "Where are the real moats?" Obviously you've got...people think moats, they think maybe network effects, or you think Uber going into a city, or you think Amazon having warehouses everywhere. Those are so structurally obvious, but then you also take maybe HubSpot or maybe Adobe, and it's like, "What's the moat there?" It's like, "I don't know. They knew a customer, and they built a good team, and they had good culture, and they maybe got a little lucky and they kept executing over and over, and they had a second product, a third product, and a fourth product." I think in B2B, that's probably far more common than this Amazon example. Where you just end up building a good company that can continue to execute at a fast pace, and knowing the customer deeply. I think you've got moats like brand, you've got moats like community, you've got distribution. We want to have all of those, but we also want to keep developing strong product and tech moats too. I think at some level that means we've got to have a continually improving product, and we've got...something where you end up having so much product built. Maybe none of it's hard to build, it just would be hard to build all of it. And by the time you built all of it, if you're a competitor, I'd be gone too. But I think when it comes to our AI, yeah, there's a ton of differentiation just around the models that you use. We want to be really nimble. We're always building in such a way that we can replace everything wholesale very, very quickly. I think a lot of companies maybe are going to get stuck on some old model or some old way of doing things, and that's going to be the death of them. We also realize that we've got a really unique dataset that our customers are giving us. We're seeing how they use it, and they're generating all sorts of content that nobody else in the world has. To the degree that we can use that to go and make models better, any model — OpenAI's, this new one that comes out, the new one that comes out tomorrow, whatever — we would be able to take that dataset and very, very quickly fine-tune and train those models to be great for our customers. It may not be great for anybody else's customers, it may not be great for any other use cases, but it's what our customers want, and we've got a good inside track there. All that being said, I think moats are something to be worked towards. I think there's a lot of pieces. I don't think Jasper, or almost any other company in B2B, will live and die on one perfect moat. It'll be a combination of six, seven, eight different things that make it hard to do it all together.
Lukas:
From a technical perspective, are there things that you do to stay on top of the generative AI space broadly?
Dave:
I'd be curious to hear from you, Saad, but I see a lot of it on Twitter, to be honest. Where do you go for breaking information every 10 minutes? It's Twitter. By the time it makes it into some newsletter roundup, that stuff's obsolete now. A lot of it's just finding and curating a good Twitter list of people that are just in the know and all of that. Conversations with other founders like yourself or other stuff like that, I hear a ton there. It feels like the only way to stay up to date is to really get all the way in, because no one's going to curate it and spoon-feed it to you. And by the time they do, you'll have missed it. What do you think, Saad?
Saad:
Yeah. Before I started at Jasper, I called up one of my mentors who was running a bunch of R&D laboratories and research processes. I was like, "How do you become a successful R&D leader?" He was like, "Well, you're probably never going to beat everybody else in the world at everything, because you have the whole world and they're all researching and coming up with the best stuff." He's like, "Definitely stay on top of that, but make that a small percentage of your focus. Find the one thing that you can be the best in the world at." As Dave said, we want to be the most customer-obsessed company. We want to understand what we can do with customer data. It'd be great if a customer could say, "I thought it and Jasper got it." They go from their idea to something that's in their hands — some content created — in the fastest, easiest way possible. I think that's the customer data: being able to take the best of the world's R&D and saying, "Hey, we're coming up with this new model that can be fine-tuned in this many ways. You have these new prompting techniques, you have these new base models or methods to hook on adapter models to get better outputs. If you want to take all of that..." The one big thing we want to do is find a way to use our customer data to get more customer fit, I think. And that's a big deal. Like I said, models are either going to get semantically more complex or get better at domain fit. I think that's almost the whole second axis that we can be the best in the world at.
Lukas:
From a hiring perspective, do you actually try to hire experts on generative AI? Is that even possible? Would anyone pass that bar at this point?
Saad:
It just goes back to this question of what expertise means. The paper on Transformers came out in 2017. We've gotten tons of amazing applicants who have a lot of AI experience. I think that's the question: is there an expert in generative AI? We're all learning these things together. Even the R&D centers — these world-class folks that come up with a model — they don't even know what the model can really do until they test it. I mean, it really is a black box. I don't think that this idea of super explainable AI can apply to this field super quickly. So, what does it mean to be an expert? I think what we're looking for is people who are obsessed with customers, who are fantastic problem solvers, who are creative and able to navigate this uniquely interdisciplinary space. You have to be really good at the AI and the data science. You also have to be really good at language, and you have to love the customer. And it's a pretty rare mix. It's like the book by Walter Isaacson, "The Innovators": the people who are good at the art, good at the science, and they have the customer obsession. I think those are probably the three right ingredients. We've been lucky to get some really great candidates along that line. I think it makes the space uniquely interesting. It's definitely not boring.
Dave:
We want to have a pretty diverse set of experiences there, because you just got to be tapped in broadly. I mean, I said this earlier, but I think it's worth saying again. I have never worked particularly well with people that come into an interview or something and say, "I really want to use this technology. I really want to use Terraform." Or whatever it is, I don't know, it's just never worked out. It tends to be, "Hey, we got this hammer, we're just walking around looking for nails all the time," instead of just realizing, "Oh my gosh, a screwdriver would've done it so much simpler, so much faster there." We tend to shy away from folks that just want to do some cool technology. But if they get excited about that as a way to solve a problem, then that's huge.
Saad:
That's actually a super good point, Dave. We've been talking a lot about fine-tuning. When people imagine fine-tuning, they think about the most complicated things first. I don't know if this is a trade secret or something, but there's actually so many simple things you can do to get major uplift. That scrappiness, it goes a long way.
Lukas:
Wait, give me an example. You can't just say there's so many things. What's the first thing you would do?
Saad:
For example, even with prompting...right now, when you think of prompting, maybe you think about a user putting in a prompt. Or maybe you think about some backend prompting that a template has, and then a user interacts with that, and then it sends it off. I mean, just a simple thing too. Like, if you have a store of what you can call context — a series of pieces of information that the customer cares about. It could be their voice. It could be a list of their products. It could be their customer's voice. It could be an example of a customer review that somebody left them and was really good. It could be a speech that maybe one of the leaders gave — you could actually just concat up various pieces of context, and then have a prompt, and then get a really cool... If you think of the generative model as a remixer, "Remix these various pieces of context and give you something new," it actually works really well. It's nothing super fancy, you're not fine-tuning the model, you're just doing really clever prompting. I've been showing our business dev guy, our business pod leader, and he's been really impressed with it. It's just these little hacks, there's thousands of these things.
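A sketch of the context-concatenation trick Saad describes, treating the generative model as a remixer over stored pieces of customer context. The store keys and snippets are invented; the point is just the concatenation.

```python
# Build a prompt by concatenating stored customer context, then the task.
# All keys and snippets below are invented for illustration.
context_store = {
    "brand_voice": "Friendly, direct, no jargon.",
    "product_list": "Jasper Docs, Jasper Art, Jasper Chrome extension.",
    "customer_review": "Cut our blog drafting time from 4 hours to 40 minutes.",
}

def remix_prompt(task: str, keys: list[str]) -> str:
    # Concatenate the selected context pieces, then append the task.
    context = "\n".join(f"- {context_store[k]}" for k in keys)
    return f"Context:\n{context}\n\nTask: {task}"

print(remix_prompt(
    "Write a launch tweet for Jasper Docs.",
    ["brand_voice", "product_list", "customer_review"],
))
```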
Dave:
And even to simplify it...this is Dave-level. Let's say I wanted to make our paragraph generator 10% better, maybe measured by...it gets copied to the clipboard 10% more often. There's obviously a bunch of ways to do that. One way, which somebody that really wants to do AI would probably pick, is to say, "Oh, we got this T5 thing and we're going to spin up our own infrastructure, and all of this stuff." Probably take a month and a half, but we'll get this thing and we'll fine-tune it on our past customer data. Yada, yada, you do all that, and whatever, it could probably work. You could also...for us, we've built it in a way that non-technical people can do it. I could go adjust the temperature from 0.7 to 0.4, and we might find that, "Holy cow, that actually produces way better paragraphs and nobody had ever even thought to test it." And that takes six minutes. Now the customer gets the 10% improvement either way. Do they care about the T5 version, think that's so cool, and so amazing, and awesome? No, they're just like, "10%? I'll just take whatever one you give me, I'm trying to write this blog post so I can get home to my kid's baseball game." We're always pushing ourselves to be like, "What we care about is the lift. What are all the ways we could do that? Let's start with the simplest one first," as opposed to just playing startup or playing AI, or doing just whatever new cool white paper came out yesterday.
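Dave's six-minute version of an improvement, sketched against the legacy pre-1.0 openai-python Completion API that was current in late 2022; the model name, prompt, and settings are illustrative.

```python
# The same prompt with only the sampling temperature changed: the whole
# "experiment" Dave describes. Assumes openai-python < 1.0 and an API key
# already configured via the OPENAI_API_KEY environment variable.
import openai

def generate_paragraph(prompt: str, temperature: float) -> str:
    response = openai.Completion.create(
        model="text-davinci-002",   # illustrative model choice
        prompt=prompt,
        temperature=temperature,    # 0.7 -> 0.4 is the entire change
        max_tokens=200,
    )
    return response.choices[0].text.strip()

# A/B the two settings and compare copy-to-clipboard rates downstream.
variant_a = generate_paragraph("Write a paragraph about trail running.", 0.7)
variant_b = generate_paragraph("Write a paragraph about trail running.", 0.4)
```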
Saad:
Yeah. And just to refine that even a little bit more, it's actually really surprising sometimes too. We did this experiment where we had two models. Let's just say model A and model B. We thought model A was going to win because it was better, and more semantically complex, and all that type of stuff, but our customers liked model B better. We thought about why. B was a little bit more wordy, more flowery. If you're an English teacher, you wouldn't have liked that, you'd like model A, but our customers really liked model B way better. It dawned on us that the customers are using the content outputs much like a sculptor looks at a big rock. They're actually trying to get something that's easy for them to delete from, rather than something perfect that they want to add to. I think the models are complicated, it's really interesting, but the customer is also very interesting, and we don't fully understand exactly what they want and what they like either. Being able to focus on that using these hacks is the way to understand the customer better, faster too.

R&D Budgeting & Focus

Lukas:
When you think about your R&D budget today, is it more 90% prompt engineering and 10% machine learning and fine-tuning and this fancy stuff? Or is it more 90% fancy stuff and 10% prompt engineering? How would you describe where you put your investments right now?
Saad:
I do view them as tied together. For us, it hasn't been such a big divide. We'll get a new model or we'll come up with new adapters for a situation, and then we have a black box. The question is, "Will this black box be better for our customers, the same, or worse?" We definitely put it into an A/B experimentation situation and start running it through a number of tasks. These tests can involve different prompts, they can involve different configurations. We have a bunch of internal metrics we run against too. Everything really just represents our hypotheses about what we think the customer will like more. Will they like more varied sentence structures, longer stuff? Will they like something that's more on topic, or so on? I think it's all a part of the toolkit, and the right percentage is the one that results in the biggest uplift fastest. I'm sure it'll always be moving around. Prompt engineering plays a huge role right now, but we know that a lot of customers have asked for more domain specificity. So, that's an area of research where a lot of our R&D budget is going, but once you overcome this initial hump, then maybe we'll go back to prompt engineering again.
Dave:
Yeah, it's probably less prompt engineering, as a percentage, than you'd probably think. Less than 50%, maybe a lot less. Yeah, I don't know. I think there's definitely some diminishing returns, where you play around with that stuff, and you get some great gains, and you keep trying, and you can't really get anything else to really break through there. [?] things here get you pretty far pretty fast, but at some point you've got to do a lot of the outside stuff to just keep getting big improvements.
Saad:
Yeah.

Tracking Insights as Your Company Grows & Remote vs. Onsite

Lukas:
Is it hard, as you scale, to institutionalize the things that you're learning? I picture your company running all these different experiments, do all these different things to make customers happy, but does every new hire have to come in and learn on their own all these things? How do you keep track of all these insights that you're having?
Dave:
That is a challenge. So much of what I try to do all day...just for context, we've gone from 10 people at the beginning of the year to 150 people now.
Lukas:
Wow, that's incredible.
Dave:
It's a ton of just...you're trying to find somebody that knows what happened three months ago. You ask 10 people and you can't find anyone that was even here. I think that's been a lot of the work, just trying to give people context over and over. Try to point people to past things that were done or past experiments that worked. Luckily, I mean, Slack is a pretty good record for a lot of that. We've got this channel that's a "shipped" channel. Anytime something gets shipped to customers, it would go in there. You learn a lot just by scrolling back through that and seeing all the different winning A/B tests, we always published that. We'll even try to write that in a way that's customer-facing, just to fully wrap your head around like, "What are we trying to do here?" Put it in a way that the customer would appreciate. Don't just talk about latency, talk about like, "Man, now our customers can generate 18% faster," and yada, yada, yada. I think a lot of it's just [?] call people back to the past, what we've done, and aggregate that in the best ways that we can. But we have not found a really easy way...we don't have a super cool training course on all the insights that we've had. I think we're getting better at trying to aggregate those and make sure the right people have them.
Saad:
I know how you feel about this, Dave. I think remoteness has a lot of benefits and a lot of challenges as well. I think a lot of folks, pre-COVID, were used to just getting into a team room, having a sprint, and learning through that sprint. It's just different in a remote setting. I think the world is just getting used to, how do you apprentice in a remote setting? We definitely try to simulate that with offsites and getting to meet the team. I'm not saying it's a challenge, but it's definitely something we're learning about as we go.
Dave:
Yeah. You guys fully remote, Lukas?
Lukas:
We're basically fully remote. We do have a headquarters in San Francisco, but our meetings are generally remote-first, and we'll hire people in any geography.
Dave:
Yeah, I definitely think going from 10 to 150 would be easier in-person, but that doesn't mean that the fullness of the company...I think that there's outsized benefits — over a long enough time horizon — to being remote. But it definitely feels like this early forming phase, trying to get knowledge in the right spot. It's tough remote, I think. The initial team was all in the same room. I think it'd be very hard to find product market fit and have the year we did out of the gate, if we were all just remote at that time.
Lukas:
Yeah, on my end I've appreciated some of the discipline that remote work forces you to do. I think we write a lot more down and you keep better agendas, and records, and things like that. For me it hasn't been all bad, but I think there's so many different ways to run a company. And I think different teams even, some prefer to do lots of onsites and some don't care at all.
Dave:
I do love that it forces you to just think more clearly, communicate more clearly, plan ahead so you're not just always putting out fires throughout the day. I really think it does do some really good stuff there.

Thoughts on Weights & Biases

Lukas:
I'm curious...we don't intentionally try to invite only customers on the podcast, but I think in the end we typically are talking to people that are customers. And you guys actually aren't a customer of Weights & Biases. I wonder, if you were in my shoes, would you be worried? Do you feel like there's a big trend happening that undercuts what Weights & Biases does? I think, Saad, you're probably more familiar with what we do, so maybe I'll let you answer that question. And I promise I won't be offended by any direction you want to take it.
Saad:
First of all, Weights & Biases is a great company, and a lot of friends are there now. And thank you for inviting us, even though we're not customers. Correct me if I'm wrong, but I feel like Weights & Biases increases in value as the customer has more and more models. It is essentially a thing that scales in value as the number of models scales, is that right?
Lukas:
I think that's what customers tell us. Yeah, for sure.
Saad:
The jury's still out in terms of how this generative AI space will shape up. But I can see some companies developing in this space that are mono-model companies. They just have their one model and they're specialized in that one model and its use cases. So obviously for that, that'd be a challenge. I could see other companies though, that they have maybe a few big base models. These things are pretty huge, you don't want to be fine-tuning a 100-billion parameter model all the time unnecessarily. Maybe these companies have two, three, four bigger models, but they have tons of adapter models or they have some small models just for different things. I can see Weights & Biases being ultra useful for that. I think overall the answer to your question is no, it'll still continue to be really useful. I think how people think about scaling models and when and how is it viable, that might change. I think we're still learning what the best architectures are that'll sustain in the space.

Generative AI in Other Domains

Lukas:
Okay. Well, usually with the nerds that we have on this show, we end with two questions. I might slightly modify them for you guys, because I have a slightly different version I want to ask you. Typically, we ask: if you had more time to research a different topic in machine learning, what would it be? But I want to ask you, if there's a different domain that you think these models might apply phenomenally well to — where there's no one like you who's come in with that customer empathy — what do you think is ripe for disruption with these generative models?
Dave:
I think about this a lot. We just did our big Series A announcement, so there's probably an army of clones gaining strength in the corners of the universe right now that will all pop up in the next two months. They'll be very much like Jasper, and probably even good products and all that. But that's a bad way to do it. You don't want to compete against us at our game. We have to be really good at our game, but ours is one of a million games. What I would encourage people to do is...this could be done anywhere, and you could take this and put it in any subset of any industry. You could do legal stuff, you could do stuff for doctors, you could do stuff for different teams and companies. If you think about CRMs over the last 20 years...I've got a friend who just started a local cleaning company, and he was like, "It was so easy to start, there's this CRM that just does everything for you." I was like, "Well, was it HubSpot or Salesforce?" He's like, "No." It's some rando thing I've never heard of in my life, that's a big company, and it's just a CRM for cleaning companies. That's where this goes. You take your end user, you understand them deeply, you take all the noise, you simplify it for them, and you give them a product that just does what they're trying to do, better. Anyone could beat Jasper by going deeper in any little vertical that we're not fully focused on. Anyone could do better at something that's a little more specific. This is true of models, too: you could get a better model if it was just more specific. That'd be my encouragement to people, maybe the community at large. This is all so cool, and there are so many people who would be thrilled to use this technology if you would just package it up for them. I think the key is to not try to be the next Jasper or do exactly what we do. Take the essence of it and go for a different customer segment. There's so much opportunity out there right now that's completely untouched, that nobody is trying to build for. Go find a community and do that, and you'll be in really good shape.
Lukas:
But when you say that, do you mean making marketing copy for lawyers, or doing some other specific thing for lawyers, just as an example?
Dave:
I mean, it could be anything. I don't think marketing copy as much. I'm thinking...well, first, I would just want to talk to lawyers: "Hey, can you tell me what you write all day?" That's where I would start, since I'm not a lawyer. But I'm guessing it's a lot of explaining to customers over email, in short form, what a document means. You could build a quick summarizer that hooks into Gmail and spits that out really fast. It could be training up paralegals to understand more, so you build a little tool that helps them synthesize documents and explains them, to train up people internally. It could be generating some boilerplate content. Maybe I talk to a lawyer and say, "Hey, show me how you put together a document," and it turns out a lot of it is going to Google, searching for boilerplate, and adding it in. Maybe it's similar to how engineers write code: a lot of it was "watch them go to Stack Overflow and copy and paste," and now Codex or whatever has framed all that up. Again, I don't know, because I haven't spent the time there, but there's probably a ton of opportunity I would never even think of. If you just spend time with a group of people, you'd see a lot of opportunity to do a lot of things, because you'd be like, "Oh my gosh, this model from two years ago does that out of the box. We'll just spin that up for you." That's more what I'm talking about, rather than "Oh, marketing copy for this niche."
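To make the email-summarizer idea concrete, here is a minimal sketch using the OpenAI API as one possible backend. The model name, prompt, and function name are illustrative assumptions, and the Gmail integration Dave mentions is left out:

```python
# A minimal sketch of a "plain-English document summarizer" for lawyers.
# The model name and prompt are illustrative assumptions, not a
# description of any real product's stack.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize_for_client(document_text: str) -> str:
    """Summarize a legal document in plain language, suitable for a client email."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You explain legal documents to clients in short, plain English."},
            {"role": "user",
             "content": f"Summarize this document for a client email:\n\n{document_text}"},
        ],
        max_tokens=300,
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(summarize_for_client("WHEREAS, the parties agree that..."))
```

The point of Dave's example is that the model call itself is the easy part; the value comes from packaging it for one specific user's workflow.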
Saad:
One quick thought here, and it's just a funny point: I try to read all of Jasper's churn notes, like who's leaving Jasper and why. There's a funny demographic of students who use Jasper. Maybe they're using it to do their homework; I'm not sure exactly, but I think some of them are. Combine that with another insight: the education sector is one of the hardest sectors. It's so hard to innovate in education, especially technology-wise. This is less that I think it's a good idea, and more just sending good vibes to whoever tries to use generative AI in education. Students seem to be using generative AI to do different things for their assignments and homework. I think it'd be really interesting if somebody did a taekwondo move and took some of the capabilities of generative AI that make a student want to use it, maybe for something that's like cheating, like doing their English essay on a generative AI app, but combined that with pedagogy so they're still learning through it. These sorts of startups are super hard to do, but wouldn't it be cool if you could have procedurally generated lessons for students, and make it really fun? It's a hard space. I'm not saying it's going to be the next unicorn or anything like that, but if somebody can succeed at using this in education...that would just be impressive.

Unexpected Challenges, Community Impact, & ARR Stats

Lukas:
Well, final question. We usually ask what the hard part is about making machine learning models work in the real world. But for you, I'm curious more broadly: making this company, making this product that people appreciate so much, what's been unexpectedly challenging about making it work?
Saad:
I think it just goes back to this insight that people are more complicated than AI. What their UX preferences are, how they want to use it, how to simplify it for them...solving the black box of AI is maybe tens of thousands of permutations, but solving the right design for the human, there's an infinite number of permutations for that. This is new for everybody, and we're still learning how people use it. Maybe it's obvious to Dave and everybody else, but I was really surprised to realize that customers want a lot of text, then they want to delete it and find the gem inside that boulder. I just don't write like that, but it was fascinating to see that's how our customers work. There are probably a thousand other things we've learned about people and how they use generative AI. It's infinite, what we can learn about people and how they want to use this. It's hard, but inspiring at the same time.
Dave:
I was thinking about something that's hard...this is a step away from the actual tech, but it's the community aspect of it. Arguably, our community has been one of our biggest advantages. We've got customers with tattoos, and we're all just riffing in there, and it's a lot of fun. But man, it's exhausting. I've had the whole community turn on me: all of a sudden Instagram's blowing up, people are pissed, people are canceling and leaving. I'm the bad guy, and I've got to go in there and save it. It's emotionally challenging to be connected to people in such a meaningful, powerful, exciting kind of way. It's work. There are times I really have to get hyped up: "Okay, I'm going to go in and do this." But to me, it just feels like table stakes. If you're not willing to do that, it's okay; maybe find a customer base that you would be willing to do that with. Because it's such a valuable part of the game of building a company and building great products. If you find yourself avoiding or not wanting to spend time with your customers, it's going to be so hard to do the rest of it. Communities are a lot of fun and high stakes: so fun when they're going really well, so miserable when they're not. But it's really, really worth it. The benefits are immense.
Lukas:
You know what's funny, Dave? The stock entrepreneurship advice that I always give people — and I haven't heard anyone else say it — is exactly what you said: find a customer that you like, because you have to spend so much time with them, and you'll just be so much happier. It's funny-
Dave:
Totally.
Lukas:
-I think both of our companies have had the identical approach of going to a really specific customer type with a clean sheet of paper: "Hey, what do you need?" It's interesting to learn that from this interview. And congrats on such a vibrant community and such a well-loved product. That's so cool. Do you want to brag about any stats? Everyone says Jasper's one of the fastest-growing companies of all time. Are there any numbers you've made public that you'd want to tell the world?
Dave:
I think what got public was year one. By the end of that year we'd gotten to $35 million ARR, and I think we only had nine people at the end of the year doing that.
Lukas:
Pretty good. Don't tell my investors about that, please.
Dave:
Yeah, totally. No, it was pretty good. It was a mix of luck, right time, right place, and the right team. That's one of those early eye-popping things. I think two months in, we added a little over $3 million of ARR in a three-day period.
Lukas:
Oh my god.
Dave:
We launched this new product, and I was just hyperventilating. Like, "Oh my god, I cannot believe this. I spent my whole life failing," and you finally hit one. No, I mean, I'm just as shocked and grateful to be a part of this as the next person. It's been a wild ride, and we're trying to stay humble and stay focused. For the first year, we didn't hire anybody, we didn't have any meetings, we didn't do any investor calls. It was so simple, almost this Garden of Eden of startups. It was just us hanging out with our customers all the time, trying to build stuff they wanted, and that really worked. As we scale, we're trying to keep that ethos and infuse it throughout the rest of the company, because there's something nice about simplicity — and something essential about simplicity — as you scale.

Outro

Lukas:
Absolutely. Well, thanks so much. Super fun interview. I really appreciate it.
Dave:
Yeah, you bet, man. Appreciate you having us on, this has been so fun.
Saad:
Thanks for having us.
Lukas:
If you're enjoying these interviews and you want to learn more, please click on the link to the show notes in the description where you can find links to all the papers that are mentioned, supplemental material and a transcription that we work really hard to produce. So, check it out.