In the latest episode of Industry Spotlight, we discuss Amazon’s newly launched AWS feature, Aurora Serverless. Check out the Flaunt Digital YouTube channel for more videos like this…
See below for the full video transcription.
Jamie: All right, the last thing we want to talk about is AWS’s new feature, which is Aurora Serverless. So there’s been a big push from Amazon Web Services for serverless stuff on the compute side; that’s what’s called “Lambda.” I hate the word “serverless” because obviously there’s a server somewhere, it’s not serverless. What they’re trying to say is that it’s no longer something you need to be provisioning or building or even thinking about. If you deploy a serverless architecture, essentially you just deploy it and then you can just leave it, and it will scale up and down without you touching it.
So on the database side, this is a new concept, really. No one’s ever really had a product that does this before, in the cloud or elsewhere, so it’s quite a big deal. And one of the coolest bits of it is it can scale down to absolutely nothing. So if you’ve got a database workload that is sporadic, and sometimes it can go days or weeks without being used at all, then if you put it on Aurora Serverless, it can just scale down to nothing and just pause. Then you just get charged for the storage; you don’t get charged for compute or memory resource, which is great because essentially it’s pretty much free to do storage on AWS, for all intents and purposes. Unless you’ve got tons and tons of it, which not many people have.
Chris: Why are they introducing that? Are they trying to… surely the idea is to get costs up, rather than down.
Jamie: No. Take it with a pinch of salt, but supposedly AWS are actively working to get everyone’s costs down. That’s one of their big mantras. You go to these big conferences, that’s what they’re trying to tell you. If you pay for support or you pay for an account manager or client manager guy, he will work with you to get your costs down. And there’s quite a few cost efficiency metrics built into the platform to try and make you spend less money.
Chris: That’s interesting. Have you any idea why they would take that sort of spin there?
Jamie: Because it’s a cool spin, it’s what a user wants to hear.
Chris: You think they’re working to… volume, migrate everybody because they’ve got the main USP in the market, get everybody over to it?
Jamie: Yeah, the more customers they can get, the cheaper they can sell it, essentially, because they’re building server farms and putting millions of customers on them, so you can sell it cheap, can’t you?
Lee: That’s the Amazon.com mantra, isn’t it? Volume.
Jamie: Yeah. So that’s great for cost efficiency because the resource in the back end will just scale, you don’t have to touch it. There’s one massive gotcha that took a bit of research. I haven’t tested this properly on a decent workload, but essentially if you let it scale down to zero and then you hit it again, obviously it needs to provision something fast, and it can take up to 25 seconds to provision something from cold, which is too long, really. So you need to factor that into how you want to do this.
So for example, if you did a staging site for a client and then you put it on this technology, you can’t really have the client hit it from cold and have to wait 25 seconds to see the web page, because they’ll just think it’s broken, won’t they, after five seconds. So it’s all right if it’s an internal staging server or an internal dev server, because obviously you know it’s 25 seconds. Come back in half a minute and it will be all right. But for clients and stuff it’s a bit flaky. So I’m not so sure about that. If it were like two seconds or five you might get away with it, but…
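As an editorial aside: one way to live with the resume-from-pause delay is to retry the first database connection in application code. A minimal sketch, assuming a generic zero-argument connection factory; the helper name and timings are illustrative, not anything AWS provides:

```python
import time


def connect_with_retry(connect, attempts=6, delay=5.0):
    """Keep retrying a connection factory until a paused cluster resumes.

    `connect` is any zero-argument callable that raises on failure,
    e.g. lambda: pymysql.connect(host=..., user=..., password=...).
    Six attempts five seconds apart covers the roughly 25-second
    worst-case resume-from-pause window discussed above.
    """
    last_error = None
    for _ in range(attempts):
        try:
            return connect()
        except Exception as exc:  # in real code, catch the driver's error class
            last_error = exc
            time.sleep(delay)
    raise last_error
```

In practice you would only wrap the initial connection this way; once the cluster is warm, subsequent queries behave normally.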
Lee: Is it because they can’t do it in two seconds, or is that something that they’re going to introduce?
Jamie: Well, it’s only just become generally available, so they work on these things all the time; they have to be on it. Depends on customer feedback, and I’m pretty sure they’ll get some feedback around that. People might be willing to pay more to get a faster boot time, basically, a faster resume-from-cold time, I guess they call it. But you can dodge that entirely: you can set parameters when you set these serverless Aurora clusters up. You can tell it to only ever scale down to one ACU, an “Aurora Capacity Unit,” they’re calling it.
So rather than having it scale to zero, you can have it scale down to one, and there’s a baseline always available. And you can scale up to whatever max limit you set, so obviously you’re paying all the time for a small amount of compute resource there, but you still get most of the benefits. You just don’t get that scaling-to-zero thing, which is probably the coolest feature of this. It’s a bit of a shame that it’s a 25-second resumption. I need to do some more testing around that, but yeah. For an internal workload, cool… but anything client-facing, you can’t have a 25-second boot, can you really?
Lee: What’s the use case for it then?
Jamie: So it’d be great if you’ve got any internal systems. So let’s say you had a tool like our SEO tool that we’ve got. For that it’s great, because if you were to hit it and go, “Oh, that’s not working,” it’s like, “Okay, give it 25 seconds, it will start working.” And then once you’ve hit it and it’s fired up, basically, I think it’s a five-minute period before it will go back down to zero. So you’re not going to hit it, wait 25 seconds and have to do it again in a minute. As long as you keep using it it’s just there. So that’s cool.
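The five-minute idle window Jamie mentions is itself configurable. A hedged sketch of enabling auto-pause on an existing cluster with boto3, again for Aurora Serverless v1; the identifier and region are placeholders, and 300 seconds is the shortest idle period the service accepts:

```python
# Let the cluster pause to zero after five idle minutes. The first request
# after a pause will then see the ~25-second resume delay discussed above.
pause_config = {
    "MinCapacity": 1,
    "MaxCapacity": 4,
    "AutoPause": True,
    "SecondsUntilAutoPause": 300,  # five minutes of inactivity before pausing
}


def enable_auto_pause(cluster_id, scaling):
    import boto3  # AWS SDK for Python (pip install boto3)

    rds = boto3.client("rds", region_name="eu-west-1")
    return rds.modify_db_cluster(
        DBClusterIdentifier=cluster_id,
        ScalingConfiguration=scaling,
    )
```

This is the trade-off in one dict: `AutoPause` on gives the scale-to-zero saving, `AutoPause` off gives the instant response of the one-ACU baseline.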
It works for use cases when you know that that’s how it operates, but yeah. For an outsider or a non-techy person, it’s a bit weird.
Chris: Based on that, do you think it will get strong adoption? Or do you think there will be similar complaints about…
Jamie: Yeah, because as long as you’re not interested in it going to zero, going cold, as long as you’re happy with a baseline… say you’re spending a fortune on a similar service per month, but you’ve got downtime, whether it’s night time, or on a weekend or whatever, you can still scale it down to one and it will still be a massive cost saving.
Lee: What’s the big commercial application for it? Like a Netflix, for example.
Jamie: Traffic spikes, that’s the commercial application. So any websites or web apps or anything that has really fluctuating traffic. So if you haven’t got steady site traffic, if you’ve got really high traffic around, say, 8 p.m. and then no traffic at 7 a.m., like Netflix I guess, then it will scale it for you. And it will scale down and up and you don’t have to touch it. All you’ve got to do is give it a min and max and it will do it all for you, which is really good. So if you’ve got loads of resource provisioned in the database back end and it’s only getting used half the time, then have a look at this.
But yeah, it’s a brand new thing, and you have to be careful with these Amazon things that are brand new, because sometimes they’re a bit flaky and there’s a few “gotchas” that you need to read about. Like the 25-second thing. But the coolest feature to me is the pause-to-zero-resource bit. So it’s a bit of a shame that it’s got that “gotcha,” but I’m sure they’ll work it out. If you can afford the base level to be running all the time then it’s all good anyway. So yeah, I’m going to try this out on a few clients, I think. But it’s exciting times; it’s uncharted territory for databases, really, so it’s pretty good.
Lee: That’s awesome.
Jamie: Yeah. But yeah, the AWS mantra is get the cost down. So sometimes they’ll just cut costs and stuff without even telling you. They always announce it at the conferences; I think they’ve done it like fifty times, or it might be a hundred times now. They’ll have just literally cut prices and applied it to your bill and you just carry on as normal.
Chris: That’s a strange business model, that. They must be trying to attract volume. Obviously having a USP like that and getting everybody to see that advantage and then just obviously making money at scale.
Lee: That’s the whole business model, isn’t it?
Jamie: Yeah, they upgrade servers as well without changing your bill. So they’ll refresh your hardware, whatever, cut your bill, and you just carry on, so you just get better service over time. Things get cheaper in hardware anyway. Plus as they get more customers they invest more and more, and then it gets cheaper anyway, if they’re buying it in bulk. Buying hard disks and processors in volume means they can sell them cheaper, basically, rent them cheaper.
Jamie: It’s pretty cool.
Chris: Win-win for everybody, by the sounds of it.
Jamie: Yeah, that’s why it’s a competitive space, but there’s loads of money to be made. So they’re willing to do outlandish stuff that sounds a bit bonkers in business terms. But if it gets you customers… Seems to be doing all right for them.
Chris: Yeah. They’re probably thinking lifetime value and stuff like that as well, won’t they? That will all be factored into an operation like that…
Jamie: Yeah, it’s pretty hard to migrate across clouds. So yeah.
Chris: If people are satisfied with the service as well that they’re getting over a period of time, I guess the goal is to never let anybody want to leave.
Jamie: Yeah. And that’s why they keep coming out with these things like Aurora Serverless. No one else is doing this. So if you get all your stuff on there and you want to migrate away, there’s no alternative elsewhere. So you’re stuck anyway.
Chris: Smart move.
Jamie: So that’s why they keep inventing their own little technologies and little concepts and encouraging people to go on it, whether it’s cost-saving or whatever. And then if you want to migrate away it’s like, “Oh, there’s no equivalent to that” and you’ve got to restructure it, or whatever. So it’s a pretty fast-growing space. It’s going to continue to evolve pretty rapidly in the next five to 10 years. Well forever, probably…